
HADOOP - How does HDFS communicate with the native file system when reading/writing 64 MB data blocks?




1233 views · asked by Raja, May 17, 2014 11:17 PM

In order to read or write 64 MB Hadoop data blocks, I guess HDFS has to communicate with the native file system (correct me if I am wrong). How does this communication happen? Is an HDFS request to the native FS similar to a user-level request for FS data blocks? Does HDFS combine the native FS blocks (4 KB each) into 64 MB blocks?
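To make the question concrete: a DataNode stores each HDFS block as an ordinary file on the native file system, and the OS in turn lays that file out in its own (typically 4 KB) disk blocks. Below is a minimal, Hadoop-free Python sketch of that splitting step; all names here are illustrative, not Hadoop APIs:

```python
import os

BLOCK_SIZE = 64 * 1024 * 1024  # 64 MB, the classic HDFS default block size

def split_into_blocks(src_path, block_dir, block_size=BLOCK_SIZE):
    """Split a file into block-sized chunks, each written as an ordinary
    file on the native file system -- roughly what a DataNode does with
    HDFS blocks. The OS then maps each block file onto its own (e.g. 4 KB)
    on-disk blocks; HDFS itself never addresses those 4 KB blocks directly."""
    block_paths = []
    with open(src_path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            path = os.path.join(block_dir, "blk_%d" % index)
            with open(path, "wb") as blk:
                blk.write(chunk)  # a plain user-level write() to the native FS
            block_paths.append(path)
            index += 1
    return block_paths
```

So the answer to "is it like a user-level request?" is yes in this sketch: the block files are read and written through the normal file-system API, and the kernel handles the mapping to physical disk blocks.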



3 Answers



 
answered By   0  
HDFS does not communicate with the Linux native file system except for storing temporary logs.

Seventy percent of the disk space is allocated to HDFS. The remainder is reserved for the operating system (Red Hat Linux), logs, and space to spill the output of map tasks (MapReduce intermediate data is not stored in HDFS; it is stored on the Linux file system).
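For reference, both the HDFS block size and the native-FS directories a DataNode writes into are ordinary configuration properties. A hypothetical hdfs-site.xml fragment (the property names are the Hadoop 2.x ones; the values and paths are purely illustrative):

```xml
<!-- hdfs-site.xml (illustrative values) -->
<configuration>
  <property>
    <name>dfs.blocksize</name>
    <value>67108864</value> <!-- 64 MB -->
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- native file-system directories where the DataNode stores block files -->
    <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
  </property>
</configuration>
```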

 
answered By   0  
Hi guys,

I am new to Big Data technologies.

Using MapReduce, I have processed text files.

Now I need to process ZIP files and XML files.

How can I do that?

Thanks
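On the ZIP/XML follow-up question: Hadoop ships input formats for compressed and record-oriented input, but as a language-neutral sketch, the kind of pre-processing a mapper could do on a whole-file input split might look like this (standard-library Python; the function name and record tag are illustrative):

```python
import io
import xml.etree.ElementTree as ET
import zipfile

def records_from_zip(zip_bytes, tag):
    """Yield the text of every <tag> element found in any XML file
    inside a ZIP archive -- the sort of extraction a mapper could run
    after receiving a whole ZIP file as its input split."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if not name.endswith(".xml"):
                continue
            root = ET.fromstring(zf.read(name))
            for elem in root.iter(tag):
                yield elem.text
```

Each yielded record could then be emitted as a key/value pair by the mapper.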

 
answered By   0  
Data stored in HDFS cannot be accessed directly through the Linux file system.
