[HADOOP] Exception in createBlockOutputStream when copying data into HDFS
While copying data into HDFS, I get the warning messages below. I am running a six-node cluster; during the copy, two of the nodes are excluded with the following warnings:
INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 192.168.226.136:50010
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1116)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1039)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)
13/11/04 05:02:15 INFO hdfs.DFSClient: Abandoning BP-603619794-127.0.0.1-1376359904614:blk_-7294477166306619719_1917
13/11/04 05:02:15 INFO hdfs.DFSClient: Excluding datanode 192.168.226.136:50010
Datanode log:
2014-02-07 04:22:01,953 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.IOException: Failed on local exception: java.io.IOException: Connection reset by peer; Host Details : local host is: "datanode4/192.168.226.136"; destination host is: "namenode":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:763)
at org.apache.hadoop.ipc.Client.call(Client.java:1235)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at sun.proxy.$Proxy10.sendHeartbeat(Unknown Source)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
at sun.proxy.$Proxy10.sendHeartbeat(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:170)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:441)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:521)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:673)
at java.lang.Thread.run(Thread.java:679)
Caused by: java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
at sun.nio.ch.IOUtil.read(IOUtil.java:224)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:56)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:143)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:411)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:276)
at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:760)
at com.google.protobuf.AbstractMessageLite$Builder.mergeDelimitedFrom(AbstractMessageLite.java:288)
at com.google.protobuf.AbstractMessage$Builder.mergeDelimitedFrom(AbstractMessage.java:752)
at org.apache.hadoop.ipc.protobuf.RpcPayloadHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcPayloadHeaderProtos.java:985)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:941)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:839)
2014-02-07 04:22:04,780 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:05,783 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:06,785 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:07,788 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:08,791 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:09,794 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:10,796 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:11,798 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:12,802 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:13,813 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.226.129:8020. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2014-02-07 04:22:13,818 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.net.ConnectException: Call From datanode4/192.168.226.136 to namenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I have tried SSH from the datanode. Can anyone please help with this one?
Please let me know if any other details are needed.
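One quick check that narrows this kind of problem down is to ask the namenode which datanodes it currently considers live (a minimal sketch using the standard HDFS admin CLI; run it on a node with a working client configuration):

# Report live/dead datanodes and their last-contact times as seen by the namenode
hdfs dfsadmin -report

If the two excluded nodes show up as dead or with stale last-contact times, the failure is between those datanodes and the namenode, which matches the "Connection refused" to namenode:8020 in the log above.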
Solution
-
==============================
1. Check your firewall.
More details in this blog post: http://ahikmat.blogspot.kr/2014/05/three-essential-things-to-do-while.html
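A minimal sketch of what that check might look like on each affected datanode, assuming a RHEL/CentOS-style system with iptables (the service name and exact rules differ by distribution):

# Inspect the current firewall rules
sudo iptables -L -n

# Temporarily stop the firewall to test whether it is what blocks the copy
sudo service iptables stop

# If the copy then succeeds, re-enable the firewall and open only the ports
# Hadoop needs, e.g. the datanode transfer port seen in the logs (50010):
sudo iptables -I INPUT -p tcp --dport 50010 -j ACCEPT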
-
==============================
2. Open the TCP ports for the unreachable nodes.
Solution: add the TCP port range 0-65535 to your security group to fix the problem.
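To verify reachability before and after changing the security group, you can probe the ports that appear in the logs from the affected machines (a sketch using standard networking tools; the IP, hostname, and ports come from the logs above):

# From the client: can we reach the excluded datanode's transfer port?
nc -zv 192.168.226.136 50010

# From datanode4: can we reach the namenode RPC port that the log
# reports as "Connection refused"?
nc -zv namenode 8020

Opening the full 0-65535 range works as a quick test, but a narrower rule covering just the Hadoop ports in use (8020, 50010, and the other daemon ports) is safer to leave in place.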
from https://stackoverflow.com/questions/21718900/exception-in-createblockoutputstream-when-copying-data-into-hdfs by cc-by-sa and MIT license