
HADOOP - What is HDFS Block size? How is it different from traditional file system block size?




1109 views
asked SRVMTrainings August 8, 2013 09:47 AM  

What is HDFS Block size? How is it different from traditional file system block size?


           

3 Answers



 
answered By Nagarjuna   0  
A Hadoop cluster (HDFS) stores the data written by clients as blocks, called HDFS blocks. The block size is typically 64 MB or larger, which is very big compared to traditional file systems such as FAT, NTFS, EXT3, EXT4, etc. Hadoop is built to handle large volumes of data (terabytes and beyond), and tiny block sizes like 4 KB or 8 KB would split that data into an enormous number of blocks, each of which the NameNode has to track. Traditional file systems usually deal with files in the MB or GB range, so small blocks work fine there. Note that HDFS blocks are not a separate on-disk format: on each datanode, an HDFS block is ultimately stored as an ordinary file on the local (traditional) file system, which in turn uses its own small blocks.
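For example, you can check the block size a particular file was written with through the Java FileSystem API. This is only a minimal sketch: the path /data/input.txt is a made-up example, and the Configuration is assumed to pick up your cluster's core-site.xml / hdfs-site.xml from the classpath.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // loads core-site.xml / hdfs-site.xml if on the classpath
        FileSystem fs = FileSystem.get(conf);        // connects to the default filesystem (HDFS when configured)
        FileStatus status = fs.getFileStatus(new Path("/data/input.txt")); // hypothetical path
        System.out.println("Block size in bytes: " + status.getBlockSize());
        fs.close();
    }
}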

 
answered By   0  
In HDFS, data is stored in the form of blocks. The default block size is 64 MB (128 MB in newer Hadoop releases), and it can be changed in the configuration file (the dfs.blocksize property in hdfs-site.xml); it is usually set to large values such as 128 MB, 256 MB, or 512 MB.

In a traditional file system, space is allocated in much smaller blocks, typically only a few KB (4 KB is a common default).
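For illustration, a minimal sketch (not part of the original answer) of overriding the block size from client code instead of hdfs-site.xml; the block size is a per-file, client-side setting, dfs.blocksize is the property name in current Hadoop releases (older 1.x releases used dfs.block.size), and the output path is a made-up example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteWithCustomBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setLong("dfs.blocksize", 128L * 1024 * 1024);  // request 128 MB blocks for files created with this conf
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(new Path("/data/output.dat")); // hypothetical path
        out.writeUTF("hello hdfs");
        out.close();
        fs.close();
    }
}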

 
answered By chalana   0  
HDFS block size is large (64 MB or 128 MB), whereas traditional file system block sizes are much smaller, only on the order of a few KB (typically 4 KB to 8 KB).
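To see why the difference matters, here is a small worked example (with assumed figures, not from the answer above) comparing how many blocks a 1 TB file needs at each block size; fewer blocks mean less metadata for the NameNode to keep in memory.

public class BlockCountExample {
    public static void main(String[] args) {
        long fileSize = 1L << 40;                 // 1 TB in bytes
        long hdfsBlock = 128L * 1024 * 1024;      // 128 MB HDFS block
        long fsBlock = 4L * 1024;                 // 4 KB traditional FS block
        System.out.println("HDFS blocks:        " + fileSize / hdfsBlock);  // 8,192 blocks
        System.out.println("Traditional blocks: " + fileSize / fsBlock);    // 268,435,456 blocks
    }
}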
