
[HADOOP] Hadoop: java.lang.Exception: java.lang.RuntimeException: Error in configuring object


First of all, thank you for your help. In my map class I instantiate another class, WebPageToText. My first question: when I run the code on Hadoop, do the print statements from the map class show up anywhere? My second question: please help me with the error below.

I keep running into this problem:

    14/04/02 20:39:35 INFO util.NativeCodeLoader: Loaded the native-hadoop library
    14/04/02 20:39:36 WARN snappy.LoadSnappy: Snappy native library is available
    14/04/02 20:39:36 INFO snappy.LoadSnappy: Snappy native library loaded
    14/04/02 20:39:36 INFO mapred.FileInputFormat: Total input paths to process : 1
    14/04/02 20:39:36 INFO mapred.JobClient: Running job: job_local1947041074_0001
    14/04/02 20:39:36 INFO mapred.LocalJobRunner: Waiting for map tasks
    14/04/02 20:39:36 INFO mapred.LocalJobRunner: Starting task: attempt_local1947041074_0001_m_000000_0
    14/04/02 20:39:36 INFO util.ProcessTree: setsid exited with exit code 0
    14/04/02 20:39:36 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1cf7491
    14/04/02 20:39:36 INFO mapred.MapTask: Processing split: file:/usr/local/hadoop/project/input1/url.txt:0+68
    14/04/02 20:39:36 INFO mapred.MapTask: numReduceTasks: 1
    14/04/02 20:39:36 INFO mapred.MapTask: io.sort.mb = 100
    14/04/02 20:39:36 INFO mapred.MapTask: data buffer = 79691776/99614720
    14/04/02 20:39:36 INFO mapred.MapTask: record buffer = 262144/327680
    14/04/02 20:39:36 INFO mapred.LocalJobRunner: Map task executor complete.
    14/04/02 20:39:36 WARN mapred.LocalJobRunner: job_local1947041074_0001
    java.lang.Exception: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:354)
    Caused by: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:93)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:64)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:426)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:223)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:701)
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:622)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:88)
        ... 11 more
    Caused by: java.lang.NoClassDefFoundError: de/l3s/boilerpipe/BoilerpipeProcessingException
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:270)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:855)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:881)
        at org.apache.hadoop.mapred.JobConf.getMapperClass(JobConf.java:968)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:34)
        ... 16 more
    Caused by: java.lang.ClassNotFoundException: de.l3s.boilerpipe.BoilerpipeProcessingException
        at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
        ... 23 more
    14/04/02 20:39:37 INFO mapred.JobClient:  map 0% reduce 0%
    14/04/02 20:39:37 INFO mapred.JobClient: Job complete: job_local1947041074_0001
    14/04/02 20:39:37 INFO mapred.JobClient: Counters: 0
    14/04/02 20:39:37 INFO mapred.JobClient: Job Failed: NA
    Exception in thread "main" java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
        at webPageToTxt.ConfMain.run(ConfMain.java:33)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at webPageToTxt.ConfMain.main(ConfMain.java:40)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:622)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)

The conf class:

package webPageToTxt;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;


public class ConfMain extends Configured implements Tool{

    public int run(String[] args) throws Exception
    {

          //creating a JobConf object and assigning a job name for identification purposes
          JobConf conf = new JobConf(getConf(), ConfMain.class);
          conf.setJobName("webpage to txt");

          //Setting configuration object with the Data Type of output Key and Value
          conf.setOutputKeyClass(Text.class);
          conf.setOutputValueClass(Text.class);

          //Providing the mapper and reducer class names
          conf.setMapperClass(WebPageToTxtMapper.class);
          conf.setReducerClass(WebPageToTxtReducer.class);

          //the hdfs input and output directory to be fetched from the command line
          FileInputFormat.addInputPath(conf, new Path(args[0]));
          FileOutputFormat.setOutputPath(conf, new Path(args[1]));

          JobClient.runJob(conf);
          System.out.println("configuration is done");
          return 0;
    }


    public static void main(String[] args) throws Exception {
         int res = ToolRunner.run(new Configuration(), new ConfMain(),args);
         System.exit(res);
    }
}

The map class:

package webPageToTxt;

import java.io.IOException;
import java.util.Scanner;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

import de.l3s.boilerpipe.BoilerpipeProcessingException;


public class WebPageToTxtMapper extends MapReduceBase implements Mapper<Text, Text, Text, Text>
{
         private Text url = new Text();
         private Text wordList = new Text();
      //map method that performs the tokenizer job and framing the initial key value pairs
      public void map(Text key, Text value, OutputCollector<Text, Text> output, Reporter reporter) throws IOException
      {
            try {
                System.out.println("Prepare to get into webpage");
//              String val = WebPageToTxt.webPageToTxt("http://en.wikipedia.org/wiki/Sun\nhttp://en.wikipedia.org/wiki/Earth");

                String val = WebPageToTxt.webPageToTxt(value.toString());
                System.out.println("Webpage main function implemented");

                Scanner scanner = new Scanner(val);
                while (scanner.hasNextLine()) {
                    String line = scanner.nextLine();
                      // process the line
                    String[] arr = line.split("`", 2);
                    url.set(arr[0]);
                    wordList.set(line);
                    output.collect(url, wordList);
                }
                scanner.close();
            } catch (BoilerpipeProcessingException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
      }
}

Solution

  1. Print statements or other output produced inside mapper or reducer methods do not appear in the main job output. To see them, use the JobTracker web UI: click map (or reduce) in the Kind column -> pick the relevant task ID in the Task ID column -> click the individual map/reduce task attempt -> click "All" in the Task Logs column.

  2. Because your mapper uses the class de.l3s.boilerpipe.BoilerpipeProcessingException, that class has to be made available to the tasks in a distributed way. Either pass the -libjars generic option when you launch the application with the hadoop command (the driver must run through ToolRunner, which ConfMain already does), or simply pack the jar containing de.l3s.boilerpipe.BoilerpipeProcessingException into the main jar itself. A command-line sketch follows this list.
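For illustration only, and assuming the Boilerpipe classes sit in a file called boilerpipe-1.2.0.jar while the job jar is called webPageToTxt.jar (both names are placeholders, not taken from the question), the -libjars route could look roughly like this; the generic option is only honored because ConfMain is launched through ToolRunner:

    # Placeholder jar names and paths -- substitute your own.
    # HADOOP_CLASSPATH makes the dependency visible to the client/driver JVM;
    # -libjars ships it to the map and reduce tasks.
    export HADOOP_CLASSPATH=/path/to/boilerpipe-1.2.0.jar
    hadoop jar webPageToTxt.jar webPageToTxt.ConfMain \
        -libjars /path/to/boilerpipe-1.2.0.jar \
        /usr/local/hadoop/project/input1 /usr/local/hadoop/project/output1

If you prefer the packaging route instead, placing the Boilerpipe jar in a lib/ directory inside the job jar has the same effect, since Hadoop adds jars found under lib/ of the job jar to the task classpath.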

from https://stackoverflow.com/questions/22825888/hadoop-java-lang-exception-java-lang-runtimeexception-error-in-configuring-ob by cc-by-sa and MIT license