복붙노트

[HADOOP] ExitCodeException while starting the NameNode

HADOOP

ExitCodeException while starting the NameNode

I have a Hadoop setup on a Solaris 10 server; I configured Hadoop 2.7.1 on it. Now, when I start the Hadoop daemons with start-dfs.sh, the DataNode and SecondaryNameNode start but the NameNode does not. When I check the NameNode log, it gives me the following error message:

2015-12-08 16:24:47,703 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = psdrac2/192.168.106.109
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.1
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a; compiled by 'jenkins' on 2015-06-29T06:04Z
STARTUP_MSG:   java = 1.8.0_66
************************************************************/
2015-12-08 16:24:47,798 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-12-08 16:24:47,832 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode
2015-12-08 16:24:50,310 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-12-08 16:24:50,977 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-12-08 16:24:50,978 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-12-08 16:24:50,998 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://psdrac2:9000
2015-12-08 16:24:51,005 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use psdrac2:9000 to access this namenode/service.
2015-12-08 16:24:51,510 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-12-08 16:24:52,680 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
2015-12-08 16:24:53,177 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-12-08 16:24:53,239 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2015-12-08 16:24:53,289 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-12-08 16:24:53,336 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-12-08 16:24:53,354 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-12-08 16:24:53,355 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-12-08 16:24:53,356 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-12-08 16:24:53,544 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-12-08 16:24:53,556 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-12-08 16:24:53,673 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-12-08 16:24:53,674 INFO org.mortbay.log: jetty-6.1.26
2015-12-08 16:24:56,059 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-12-08 16:24:56,310 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,313 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,315 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-08 16:24:56,315 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-12-08 16:24:56,362 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,364 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:24:56,701 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-12-08 16:24:56,702 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-12-08 16:24:57,154 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-12-08 16:24:57,155 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-12-08 16:24:57,171 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-12-08 16:24:57,191 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Dec 08 16:24:57
2015-12-08 16:24:57,215 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-12-08 16:24:57,216 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:57,232 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
2015-12-08 16:24:57,233 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-12-08 16:24:57,368 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-12-08 16:24:57,370 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2015-12-08 16:24:57,370 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-12-08 16:24:57,371 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-12-08 16:24:57,372 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-12-08 16:24:57,422 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
2015-12-08 16:24:57,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-12-08 16:24:57,423 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-12-08 16:24:57,424 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-12-08 16:24:57,435 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-12-08 16:24:58,543 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-12-08 16:24:58,543 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,544 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
2015-12-08 16:24:58,544 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
2015-12-08 16:24:58,554 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: ACLs enabled? false
2015-12-08 16:24:58,554 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: XAttrs enabled? true
2015-12-08 16:24:58,555 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Maximum size of an xattr: 16384
2015-12-08 16:24:58,556 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-12-08 16:24:58,625 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-12-08 16:24:58,625 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,626 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
2015-12-08 16:24:58,626 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
2015-12-08 16:24:58,640 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-12-08 16:24:58,641 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-12-08 16:24:58,641 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-12-08 16:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-12-08 16:24:58,665 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-12-08 16:24:58,666 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-12-08 16:24:58,677 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-12-08 16:24:58,678 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-12-08 16:24:58,695 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-12-08 16:24:58,696 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2015-12-08 16:24:58,697 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2015-12-08 16:24:58,697 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
2015-12-08 16:24:58,790 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/in_use.lock acquired by nodename 15020@psdrac2
2015-12-08 16:24:59,268 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Recovering unfinalized segments in /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current
2015-12-08 16:24:59,272 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2015-12-08 16:24:59,600 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-12-08 16:24:59,878 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-12-08 16:24:59,879 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/fsimage_0000000000000000000
2015-12-08 16:24:59,956 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2015-12-08 16:24:59,958 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Starting log segment at 1
2015-12-08 16:25:01,370 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-12-08 16:25:01,371 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 2645 msecs
2015-12-08 16:25:03,759 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to psdrac2:9000
2015-12-08 16:25:03,809 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-12-08 16:25:03,909 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-12-08 16:25:04,108 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-12-08 16:25:04,116 WARN org.apache.hadoop.hdfs.server.common.Util: Path /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2015-12-08 16:25:04,169 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-12-08 16:25:04,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Ending log segment 1
2015-12-08 16:25:04,173 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 5 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 25 
2015-12-08 16:25:04,184 INFO org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits file /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/edits_inprogress_0000000000000000001 -> /u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode/current/edits_0000000000000000001-0000000000000000002
2015-12-08 16:25:04,202 INFO org.apache.hadoop.ipc.Server: Stopping server on 9000
2015-12-08 16:25:04,294 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for active state
2015-12-08 16:25:04,294 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2015-12-08 16:25:04,315 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2015-12-08 16:25:04,329 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-12-08 16:25:04,333 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-12-08 16:25:04,335 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-12-08 16:25:04,380 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.fs.DF.getFilesystem(DF.java:76)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1058)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:678)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:664)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
2015-12-05 16:46:08,229 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-12-05 16:46:08,239 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
2015-12-08 16:25:04,418 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at psdrac2/192.168.106.109

Why am I getting this error?

Solutions

  1. ==============================

    1. Can you try the 'df -k -P' command on your Solaris server? If that doesn't work, you need to make sure your Solaris server supports the -P option, for example by linking the default df command to "/usr/xpg4/bin/df".
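
    A quick way to check this on the box, plus one possible workaround, is sketched below (the hadoop-env.sh PATH change is an assumption about this particular install, not part of the original answer):

    # Default Solaris 10 df typically rejects -P; the XPG4 df accepts it
    df -k -P /
    /usr/xpg4/bin/df -k -P /

    # Possible workaround: put the XPG4 utilities first on the PATH used by the
    # Hadoop daemons, e.g. in $HADOOP_HOME/etc/hadoop/hadoop-env.sh
    export PATH=/usr/xpg4/bin:$PATH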

  2. ==============================

    2. Remove the mapred.job.tracker property from your mapred-site.xml file, then try starting the services again.

    Also add the parameters below to your yarn-site.xml file:

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>psdrac2:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>psdrac2:8040</value>
    </property>
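
    After editing the configuration, the daemons have to be restarted so the new settings are picked up; a minimal sketch using the standard Hadoop 2.7.1 sbin scripts:

    # Restart HDFS and YARN after changing mapred-site.xml / yarn-site.xml
    $HADOOP_HOME/sbin/stop-yarn.sh
    $HADOOP_HOME/sbin/stop-dfs.sh
    $HADOOP_HOME/sbin/start-dfs.sh
    $HADOOP_HOME/sbin/start-yarn.sh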

  3. ==============================

    3. @Rohan's answer is correct. I had exactly the same problem on Solaris 10. Here is a more in-depth analysis of the root cause.

    DF.java (line 144) tries to run the following command:

    return new String[] {"bash","-c","exec 'df' '-k' '-P' '" + dirPath + "' 2>/dev/null"};
    

    So the default 'df' binary on Solaris does not support the -P argument, and you therefore have to use "/usr/xpg4/bin/df" to make this work.
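
    To confirm this outside of Hadoop, you can run the same command that DF.java builds, using the namenode directory from the log above (a sketch; the expected exit codes are an assumption about stock Solaris 10):

    # This is what NameNodeResourceChecker ends up running; with the default Solaris df
    # it exits non-zero, which Hadoop reports as ExitCodeException exitCode=1
    bash -c "exec 'df' '-k' '-P' '/u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode' 2>/dev/null"; echo $?

    # With the XPG4 df first on the PATH the same command should succeed (exit 0)
    PATH=/usr/xpg4/bin:$PATH bash -c "exec 'df' '-k' '-P' '/u03/recoverdest/hadoop_install/hadoop-2.7.1/data/namenode' 2>/dev/null"; echo $?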

  4. from https://stackoverflow.com/questions/34126781/exitcodeexception-while-staring-namenode by cc-by-sa and MIT license