When I load a file into HDFS, I need to set its block size to something smaller than the cluster default. For example, if HDFS is using 64MB blocks, I may want a large file to be copied in with 32MB blocks.
I have done this before inside a Hadoop workload using the org.apache.hadoop.fs.FileSystem.create() function, but is there a way to do it from the command line?
You can do this by passing -Ddfs.block.size=<size-in-bytes> to the hadoop fs command. For example:
hadoop fs -Ddfs.block.size=1048576 -put ganglia-3.2.0-1.src.rpm /home/hcoyote
As you can see here, the block size changes to whatever you define on the command line (in my case the default is 64MB, but I'm dropping it down to 1MB here).
:; hadoop fsck -blocks -files -locations /home/hcoyote/ganglia-3.2.0-1.src.rpm
FSCK started by hcoyote from /10.1.1.111 for path /home/hcoyote/ganglia-3.2.0-1.src.rpm at Mon Aug 15 14:34:14 CDT 2011
/home/hcoyote/ganglia-3.2.0-1.src.rpm 1376561 bytes, 2 block(s):  OK
0. blk_5365260307246279706_901858 len=1048576 repl=3 [10.1.1.115:50010, 10.1.1.105:50010, 10.1.1.119:50010]
1. blk_-6347324528974215118_901858 len=327985 repl=3 [10.1.1.106:50010, 10.1.1.105:50010, 10.1.1.104:50010]

Status: HEALTHY
 Total size:                    1376561 B
 Total dirs:                    0
 Total files:                   1
 Total blocks (validated):      2 (avg. block size 688280 B)
 Minimally replicated blocks:   2 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          12
 Number of racks:               1
FSCK ended at Mon Aug 15 14:34:14 CDT 2011 in 0 milliseconds

The filesystem under path '/home/hcoyote/ganglia-3.2.0-1.src.rpm' is HEALTHY
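The numbers in the fsck output check out: a 1376561-byte file written with a 1048576-byte (1MB) block size splits into one full block plus a 327985-byte remainder. A quick sketch of that arithmetic (block_layout is just an illustrative helper, not part of any Hadoop API):

```python
import math

def block_layout(file_size, block_size):
    """Split a file of file_size bytes into HDFS-style blocks of
    block_size bytes. Returns the list of block lengths: every block
    is full-sized except possibly the last one. Assumes file_size > 0.
    """
    n_blocks = math.ceil(file_size / block_size)
    lengths = [block_size] * (n_blocks - 1)
    # The final block holds whatever is left over.
    lengths.append(file_size - block_size * (n_blocks - 1))
    return lengths

# The 1376561-byte RPM from the fsck output, written with a 1MB block size:
print(block_layout(1376561, 1048576))  # [1048576, 327985]
```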
NOTE: there is an issue with this in Hadoop 0.21 — you have to use -D dfs.blocksize instead of -D dfs.block.size there.
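If you want the smaller block size to apply to every write from a client rather than a single command, the same property can be set in the client's hdfs-site.xml instead. A sketch for a 32MB default (using the old dfs.block.size name; the value is bytes, and note it must be a multiple of the checksum chunk size, 512 bytes by default):

```xml
<property>
  <!-- 32 MB = 32 * 1024 * 1024 bytes -->
  <name>dfs.block.size</name>
  <value>33554432</value>
</property>
```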