How much remaining space does an HDFS DataNode actually need? Below are my notes from investigating that question. Yesterday, the following exception was posted in the discussion group:
[hadoop@odbtest bin]$ hadoop fs -put ../tmp/file3 /user/hadoop/in2
14/01/15 02:14:09 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/in2/file3._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

This exception appears only in the NameNode's log; the DataNode logs contain nothing related. That tells us the check happens on the NameNode, at the moment it allocates a block. This situation usually means either that a DataNode has gone dead or that a DataNode's disk is running out of space. The reporter was therefore advised to free up some space under the DataNode's data directory, and after doing so the operation succeeded.
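Incidentally, the same aggregate capacity numbers that hdfs dfsadmin -report prints can also be read programmatically through the Hadoop FileSystem API. A minimal sketch, assuming a Hadoop 2.2.0 client and a NameNode reachable at hdfs://odbtest:8020 (adjust the URI for your cluster):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class DfsRemaining {
    public static void main(String[] args) throws Exception {
        // Assumption: the NameNode RPC address is hdfs://odbtest:8020.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create("hdfs://odbtest:8020"), conf);

        // FsStatus exposes the same totals that "hdfs dfsadmin -report" shows.
        FsStatus status = fs.getStatus();
        System.out.printf("Configured Capacity: %d bytes%n", status.getCapacity());
        System.out.printf("DFS Used:            %d bytes%n", status.getUsed());
        System.out.printf("DFS Remaining:       %d bytes%n", status.getRemaining());

        fs.close();
    }
}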
However, the person who raised the question then provided the report data:

[hadoop@odbtest bin]$ hdfs dfsadmin -report
Configured Capacity: 8210259968 (7.65 GB)
Present Capacity: 599728128 (571.95 MB)
DFS Remaining: 599703552 (571.92 MB)
DFS Used: 24576 (24 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Live datanodes:
Name: 192.168.136.128:50010 (odbtest)
Hostname: odbtest
Decommission Status : Normal
Configured Capacity: 8210259968 (7.65 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 7610531840 (7.09 GB)
DFS Remaining: 599703552 (571.92 MB)
DFS Used%: 0.00%
DFS Remaining%: 7.30%
Last contact: Tue Jan 14 23:47:26 PST 2014

According to this report, DFS still has 571.92 MB remaining, so the file should have been writable, yet the exception was thrown anyway. There must therefore be a lower bound on how much free space a DataNode is required to have. Checking the Hadoop 2.2.0 source, the method isGoodTarget of org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault checks the remaining capacity of each candidate DataNode:
long remaining = node.getRemaining() -
                 (node.getBlocksScheduled() * blockSize);
// check the remaining capacity of the target machine
if (blockSize * HdfsConstants.MIN_BLOCKS_FOR_WRITE > remaining) {
  if (LOG.isDebugEnabled()) {
    threadLocalBuilder.get().append(node.toString()).append(": ")
        .append("Node ").append(NodeBase.getPath(node))
        .append(" is not chosen because the node does not have enough space ");
  }
  return false;
}

As the code shows, a node is rejected (isGoodTarget returns false) when its remaining capacity is smaller than blockSize * HdfsConstants.MIN_BLOCKS_FOR_WRITE. With the defaults, blockSize * HdfsConstants.MIN_BLOCKS_FOR_WRITE = 128 MB * 5 = 640 MB, which is larger than the 571.92 MB remaining on this DataNode. That explains why the exception occurred.
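To make the arithmetic concrete, here is a small standalone sketch that reproduces the same comparison using the numbers from the report above (the 128 MB block size and MIN_BLOCKS_FOR_WRITE = 5 are the Hadoop 2.2.0 defaults; the remaining value is the DataNode's reported DFS Remaining):

public class MinSpaceCheck {
    // Hadoop 2.2.0 defaults: dfs.blocksize = 128 MB, HdfsConstants.MIN_BLOCKS_FOR_WRITE = 5
    static final long BLOCK_SIZE = 128L * 1024 * 1024;
    static final int MIN_BLOCKS_FOR_WRITE = 5;

    public static void main(String[] args) {
        // "DFS Remaining" reported for the single DataNode: 599703552 bytes (571.92 MB)
        long remaining = 599703552L;
        // Minimum space the placement policy requires: 671088640 bytes (640 MB)
        long required = BLOCK_SIZE * MIN_BLOCKS_FOR_WRITE;

        System.out.printf("required  = %d bytes (%.2f MB)%n", required, required / 1024.0 / 1024.0);
        System.out.printf("remaining = %d bytes (%.2f MB)%n", remaining, remaining / 1024.0 / 1024.0);

        // Same comparison as isGoodTarget: the node is rejected when required > remaining.
        if (required > remaining) {
            System.out.println("Node rejected: not enough remaining space");
        } else {
            System.out.println("Node accepted");
        }
    }
}

Since 640 MB is greater than 571.92 MB, the only DataNode in the cluster is rejected as a placement target and the write fails with the "could only be replicated to 0 nodes" error; once enough space is freed for DFS Remaining to exceed roughly 640 MB, the node becomes eligible again, which matches the fact that the operation succeeded after space was freed.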
Reposted from: https://www.cnblogs.com/sha0830/p/5060600.html