The DataNode log shows the following error:
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
at java.lang.Thread.run(Thread.java:745)
2017-05-01 10:34:13,944 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to host103/192.168.1.103:9000
2017-05-01 10:34:14,045 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-05-01 10:34:16,045 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-05-01 10:34:16,047 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-05-01 10:34:16,049 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
From the log we can see the cause: the datanode's clusterID does not match the namenode's clusterID.
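To confirm the mismatch, you can print the clusterID line from both VERSION files side by side. The sketch below uses hypothetical directories (`/tmp/demo/dfs/name` and `/tmp/demo/dfs/data`) and creates mock VERSION files purely for illustration; on a real cluster, substitute the paths configured in your hdfs-site.xml and skip the mock setup.

```shell
# Hypothetical data directories -- substitute the paths set in your
# hdfs-site.xml (dfs.namenode.name.dir / dfs.datanode.data.dir).
NN_DIR=/tmp/demo/dfs/name
DN_DIR=/tmp/demo/dfs/data

# Mock VERSION files to illustrate the mismatch (skip this on a real cluster).
mkdir -p "$NN_DIR/current" "$DN_DIR/current"
echo 'clusterID=CID-aaaa-1111' > "$NN_DIR/current/VERSION"
echo 'clusterID=CID-bbbb-2222' > "$DN_DIR/current/VERSION"

# Compare the clusterID lines from both VERSION files.
grep clusterID "$NN_DIR/current/VERSION" "$DN_DIR/current/VERSION"
```

If the two lines show different IDs, the datanode was formatted against an older namenode state (typically after re-running `hdfs namenode -format`), which is what produces the registration failure above.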
Open the datanode and namenode data directories configured in hdfs-site.xml, and inspect the VERSION file inside each directory's current folder. The clusterID values are indeed inconsistent, exactly as the log reports. Edit the datanode's VERSION file so its clusterID matches the namenode's, then restart DFS (run start-dfs.sh) or start the daemons individually:
/hadoop-2.6.0/sbin/hadoop-daemon.sh start datanode
/hadoop-2.6.0/sbin/yarn-daemon.sh start nodemanager
Running jps again shows that the DataNode has started normally.
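The VERSION fix above can also be scripted: read the clusterID from the namenode's VERSION file and write it into the datanode's. This is a sketch with hypothetical paths and mock files (again, use your real hdfs-site.xml directories and drop the mock setup); it assumes GNU sed for the in-place edit.

```shell
# Hypothetical paths -- adjust to your hdfs-site.xml settings.
NN_VERSION=/tmp/demo2/dfs/name/current/VERSION
DN_VERSION=/tmp/demo2/dfs/data/current/VERSION

# Mock VERSION files for illustration only.
mkdir -p "$(dirname "$NN_VERSION")" "$(dirname "$DN_VERSION")"
echo 'clusterID=CID-aaaa-1111' > "$NN_VERSION"
echo 'clusterID=CID-bbbb-2222' > "$DN_VERSION"

# Read the namenode's clusterID and write it into the datanode's VERSION file.
CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=$CID/" "$DN_VERSION"

# The datanode's clusterID now matches the namenode's.
grep clusterID "$DN_VERSION"
```

Copying the ID this way preserves the datanode's existing block data, whereas deleting the datanode's data directory and re-registering would also resolve the mismatch but discards the blocks stored on that node.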
Please credit when reposting: LAOV Blog » Hadoop DataNode fails to start