A Hadoop cluster (HDFS) stores the data given by clients in blocks, called HDFS blocks. The default block size is 64 MB (and it can be set higher), which is very large compared to block sizes in traditional file systems such as FAT, NTFS, EXT3, or EXT4. This is because Hadoop handles large volumes of data, in the range of terabytes; block sizes of 4 KB or 8 KB are simply not practical for that much data. Traditional file systems typically deal with files in the MB or GB range. Note that HDFS still relies on the traditional file system underneath: each HDFS block is stored as a file on a datanode's local file system.
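As a rough back-of-the-envelope calculation: storing a single 1 TB file with 4 KB blocks would require about 268 million blocks (2^40 / 2^12 = 2^28), each of which the namenode must track in memory, whereas 128 MB blocks bring that down to just 8,192 blocks (2^40 / 2^27 = 2^13).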
In HDFS, data is stored in blocks. The default block size is 64 MB, and it can be changed in the configuration file (hdfs-site.xml); in practice it is usually set to larger values such as 128 MB, 256 MB, or 512 MB (see the Java sketch at the end of this answer).
Coming to a traditional file system, space is allocated in blocks of a few kilobytes (typically 1 KB to 4 KB per block).
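As a minimal sketch of changing the block size from a Java client (assuming a reachable HDFS cluster; the path /user/demo/sample.txt is just a hypothetical example), you can either override the dfs.blocksize property on the client configuration or pass a per-file block size to FileSystem.create():

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            // Override the client-side default block size (dfs.blocksize) to 128 MB.
            // The cluster-wide default normally lives in hdfs-site.xml.
            Configuration conf = new Configuration();
            conf.setLong("dfs.blocksize", 128L * 1024 * 1024);

            FileSystem fs = FileSystem.get(conf);

            // The block size can also be chosen per file via create():
            // create(path, overwrite, bufferSize, replication, blockSize)
            Path path = new Path("/user/demo/sample.txt");  // hypothetical path
            FSDataOutputStream out =
                    fs.create(path, true, 4096, (short) 3, 256L * 1024 * 1024);
            out.writeUTF("this file is stored in 256 MB HDFS blocks");
            out.close();
            fs.close();
        }
    }

The per-file override in create() only affects how that one file is split into blocks; files written without it continue to use the configured default.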