[HADOOP] When running a Hadoop example, I encountered ".staging/job_1541144755485_0002/job.splitmetainfo does not exist". What can I do?
My configuration is as follows: I used two machines for this Hadoop experiment, pc720 (10.10.1.1) and pc719 (10.10.1.2). The JDK (version 1.8.0_181) was installed via apt-get. Hadoop 2.7.1 was downloaded from https://archive.apache.org/dist/hadoop/common/hadoop-2.7.1/ and placed in /opt/.
Step 1: I added the following to /etc/bash.bashrc:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=${JAVA_HOME}/bin:${PATH}
export HADOOP_HOME=/opt/hadoop-2.7.1
export PATH=${HADOOP_HOME}/bin:${PATH}
export PATH=${HADOOP_HOME}/sbin:${PATH}
다음 "소스의 / etc / 프로필"실행
Step 2: I configured the XML files.
slaves:
10.10.1.2
core-site.xml:
<property>
<name>fs.defautFS</name>
<value>hdfs://10.10.1.1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/root/hadoop_store/tmp</value>
</property>
hdfs-site.xml:
<property>
<name>dfs:replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/root/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/root/hadoop_store/hdfs/datanode</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>10.10.1.1:9001</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>10.10.1.1:8080</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
mapred-site.xml:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.address</name>
<value>10.10.1.1:9002</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>10.10.1.1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>10.10.1.1:19888</value>
</property>
<property>
<name>mapred.acls.enabled</name>
<value>true</value>
</property>
yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>10.10.1.1</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8182</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Step 3: I created several directories under /root:
mkdir hadoop_store
mkdir hadoop_store/hdfs
mkdir hadoop_store/tmp
mkdir hadoop_store/hdfs/datanode
mkdir hadoop_store/hdfs/namenode
Then I changed to /opt/hadoop-2.7.1/bin and ran:
./hdfs namenode -format
cd ..
cd sbin/
./start-all.sh
./mr-jobhistory-daemon.sh start historyserver
I then ran jps on pc720 and on pc719 (the original jps output did not survive this repost).
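For reference (an assumption, not the original output): with the slaves file listing only 10.10.1.2, jps on this setup would typically show something like:
pc720 (master):
  NameNode
  SecondaryNameNode
  ResourceManager
  JobHistoryServer
  Jps
pc719 (slave):
  DataNode
  NodeManager
  Jps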
Having reached this point, I thought my Hadoop 2.7.1 had been installed and configured successfully. But then a problem appeared.
I changed to /opt/hadoop-2.7.1/share/hadoop/mapreduce/, then ran the pi example:
hadoop jar hadoop-mapreduce-examples-2.7.1.jar pi 2 2
The log is as follows:
Number of Maps = 2
Samples per Map = 2
Wrote input for Map #0
Wrote input for Map #1
Starting Job
18/11/02 08:14:48 INFO client.RMProxy: Connecting to ResourceManager at /10.10.1.1:8032
18/11/02 08:14:48 INFO input.FileInputFormat: Total input paths to process : 2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: number of splits:2
18/11/02 08:14:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1541144755485_0002
18/11/02 08:14:49 INFO impl.YarnClientImpl: Submitted application application_1541144755485_0002
18/11/02 08:14:49 INFO mapreduce.Job: The url to track the job: http://node-0-link-0:8088/proxy/application_1541144755485_0002/
18/11/02 08:14:49 INFO mapreduce.Job: Running job: job_1541144755485_0002
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 running in uber mode : false
18/11/02 08:14:53 INFO mapreduce.Job: map 0% reduce 0%
18/11/02 08:14:53 INFO mapreduce.Job: Job job_1541144755485_0002 failed with state FAILED due to: Application application_1541144755485_0002 failed 2 times due to AM Container for appattempt_1541144755485_0002_000002 exited with exitCode: -1000
For more detailed output, check application tracking page:http://node-0-link-0:8088/cluster/app/application_1541144755485_0002Then, click on links to logs of each attempt.
Diagnostics: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
java.io.FileNotFoundException: File file:/tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Failing this attempt. Failing the application.
18/11/02 08:14:53 INFO mapreduce.Job: Counters: 0
Job Finished in 5.08 seconds
java.io.FileNotFoundException: File file:/opt/hadoop-2.7.1/share/hadoop/mapreduce/QuasiMonteCarlo_1541168087724_1532373667/out/reduce-out does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:606)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:819)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:596)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1752)
at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1776)
at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
I have tried many solutions, but none of them seemed to help. This problem has been confusing me for about a week. Is there a mistake in my configuration? What can I do? Please help me. Thanks!
Solution
==============================
1. I believe you missed a trailing slash: the fs.defaultFS value should be <value>hdfs://10.10.1.1:9000/</value>.
Note that /tmp/hadoop-yarn/staging/root/.staging/job_1541144755485_0002/job.splitmetainfo is normally kept in HDFS, in the staging location configured by yarn.app.mapreduce.am.staging-dir. This hints that the problem lies with HDFS.
Source: https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cdh_ig_hdfs_cluster_deploy.html
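A corrected core-site.xml entry might look like the sketch below. Note also that the question's config spells the property name fs.defautFS; the actual Hadoop key is fs.defaultFS, and with a misspelled key the default silently falls back to the local file:// filesystem, which would explain the file:/tmp/... paths in the error. (The hdfs-site.xml entry dfs:replication should likewise be dfs.replication.)
<property>
<!-- key must be spelled fs.defaultFS; fs.defautFS is silently ignored -->
<name>fs.defaultFS</name>
<value>hdfs://10.10.1.1:9000/</value>
</property>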
==============================
2. By running "hadoop dfs -ls /tmp", it displays:
Found 17 items
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.ICE-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.Test-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.X11-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.XIM-unix
drwxrwxrwt - root root 4096 2018-11-01 22:33 /tmp/.font-unix
drwxr-xr-x - root root 4096 2018-11-04 23:45 /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08
drwxr-xr-x - root root 4096 2018-11-04 23:45 /tmp/Jetty_10_10_1_1_8088_cluster____.kglkoh
drwxr-xr-x - root root 4096 2018-11-04 23:46 /tmp/Jetty_node.0.link.0_19888_jobhistory____.ckp60y
drwxr-xr-x - root root 4096 2018-11-04 23:45 /tmp/Jetty_node.0.link.0_9001_secondary____.h648yx
drwxr-xr-x - root root 4096 2018-11-02 01:13 /tmp/hadoop-root
-rw-r--r-- 1 root root 6 2018-11-04 23:45 /tmp/hadoop-root-namenode.pid
-rw-r--r-- 1 root root 6 2018-11-04 23:45 /tmp/hadoop-root-secondarynamenode.pid
drwxr-xr-x - root root 4096 2018-11-02 01:14 /tmp/hadoop-yarn
drwxr-xr-x - root root 4096 2018-11-05 00:02 /tmp/hsperfdata_root
-rw-r--r-- 1 root root 6 2018-11-04 23:46 /tmp/mapred-root-historyserver.pid
-rw-r--r-- 1 root root 388 2018-11-01 22:33 /tmp/ntp.conf.new
-rw-r--r-- 1 root root 6
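This listing appears to be the contents of the local /tmp directory (Jetty working directories, daemon .pid files, /tmp/.X11-unix), not an HDFS path, which is consistent with fs.defaultFS resolving to the local filesystem. As a follow-up check (a suggestion, not part of the original answer), you can ask Hadoop which filesystem it actually resolves:
hdfs getconf -confKey fs.defaultFS        # prints file:/// if the key is missing or misspelled in core-site.xml
hadoop fs -ls hdfs://10.10.1.1:9000/tmp   # lists HDFS explicitly, bypassing the default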
from https://stackoverflow.com/questions/53147682/when-running-hadoop-example-i-encountered-staging-job-1541144755485-0002-job by cc-by-sa and MIT license