
HADOOP - HOW DO I LEARN BIG DATA?




Asked by sulekha_roy255, April 27, 2016, 08:17 AM


Hadoop is at the heart of many data science projects. New technologies built on top of Hadoop are released all the time, and it can be difficult to keep up.

Hadoop and big data certification training will be very helpful for passing the professional certification exam on Hadoop and advanced data analytics. With the wide array of tools at your disposal, here is a list of some of the most important ones:

Apache Hadoop, the official distribution.

Apache Ambari, a software package for managing Hadoop clusters.

HDFS and MapReduce, the storage layer and the basic framework for splitting work across a cluster, which underpin Hadoop.
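To see what "splitting work across a cluster" means in practice, here is a minimal pure-Python sketch of the classic MapReduce word count. It only mimics the map, shuffle, and reduce phases in one process; it is not the Hadoop API.

```python
from collections import defaultdict

def map_phase(document):
    # Emit (word, 1) pairs, as a word-count mapper would.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, mimicking the shuffle/sort step between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word, as a reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data on Hadoop", "Hadoop splits big jobs"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["hadoop"])  # 2
```

In real Hadoop, the mappers and reducers run on different machines and HDFS stores the input and output, but the data flow is the same.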

  • Set up Hadoop infrastructure with single- and multi-node clusters on Amazon EC2 (CDH4)
  • Monitor a Hadoop cluster and execute routine administration procedures
  • Learn ETL connectivity with Hadoop big data, ETL tools, and real-time case studies
  • Learn advanced big data technologies, write Hive and Apache Pig scripts, and work with Sqoop
  • Perform big data analytics using YARN
  • Schedule jobs through Oozie
  • Master Impala to run real-time queries on Hadoop
  • Deal with Hadoop component failures and recoveries
  • Optimize a Hadoop cluster for the best performance based on specific job requirements
  • Learn to work with complex big data analytics tools in real-world applications and use the Hadoop file system (modeled on the Google File System, GFS)
  • Derive insight into the field of data science and advanced data analytics
  • Gain insights into real-time processes happening in several big data companies
  • Work on a real-time project in big data analytics and gain hands-on Big Data and Hadoop project experience

Apache HBase, a column-oriented database built on top of Hadoop.

Apache Hive, a data warehouse built on top of Hadoop that makes data accessible through an SQL-like language.
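To give a feel for "accessible through an SQL-like language": Hive runs HiveQL over files in HDFS. The sketch below uses Python's built-in sqlite3 as a stand-in engine to show the kind of aggregate query involved; the table and column names are invented for illustration.

```python
import sqlite3

# sqlite3 stands in for Hive here; in Hive the same style of query
# would run over files stored in HDFS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (page TEXT, views INTEGER)")
conn.executemany("INSERT INTO page_views VALUES (?, ?)",
                 [("home", 120), ("docs", 45), ("home", 30)])

# An aggregate query of the sort you would write in HiveQL.
rows = conn.execute(
    "SELECT page, SUM(views) FROM page_views GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # [('docs', 45), ('home', 150)]
```

The point of Hive is exactly this: analysts who know SQL can query big data on Hadoop without writing MapReduce jobs by hand.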

Apache Sqoop, a tool for transferring data between Hadoop and other data stores.

Apache Pig, a platform for running code on data in Hadoop in parallel.

Apache ZooKeeper, a tool for configuring and synchronizing Hadoop clusters.

NoSQL, a class of databases that break from traditional relational database management systems and SQL. Popular NoSQL databases include Cassandra, Riak, and MongoDB.
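The break from the relational model is easiest to see in the document-store style used by systems such as MongoDB: records are schema-less and need not share the same fields. A toy in-memory sketch (not any real database's API):

```python
# Minimal document-store sketch: records are schema-less dicts keyed by id,
# not rows in a fixed table.
store = {}

def put(doc_id, document):
    # No schema is enforced; fields can vary from record to record.
    store[doc_id] = document

def find(predicate):
    # Query by scanning documents; real stores add indexes on top of this idea.
    return [doc for doc in store.values() if predicate(doc)]

put("u1", {"name": "asha", "tags": ["hadoop", "hive"]})
put("u2", {"name": "ravi", "city": "Pune"})  # different fields, still fine

hadoop_users = find(lambda d: "hadoop" in d.get("tags", []))
print([d["name"] for d in hadoop_users])  # ['asha']
```

This flexibility is what makes NoSQL stores a common companion to Hadoop for semi-structured data.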

Apache Mahout, a machine learning library designed to run on data stored in Hadoop.

Apache Solr, a tool for indexing text data that integrates well with Hadoop.

Apache Avro, a data serialization system.

Apache Oozie, a workflow manager for the Hadoop tool chain.

GIS tools, a set of tools to help manage the geographical components of your data.

Apache Flume, a system for collecting log data and moving it into HDFS.

SQL on Hadoop; some of the most popular options include Apache Hive, Cloudera Impala, Presto (Facebook), Shark, Apache Drill, EMC/Pivotal HAWQ, Big SQL by IBM, Apache Phoenix (for HBase), and Apache Tajo.

Clouds, managed servers and services that remove the hassle of running your own infrastructure.

Apache Spark, a new way to run algorithms even faster on Hadoop data.
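Spark gets its speed by keeping data in memory and chaining transformations over distributed collections (RDDs). The toy class below sketches that chaining style in plain Python; it is not the real PySpark API, just an illustration of the programming model.

```python
# Toy in-memory stand-in for Spark's RDD chaining style (not the PySpark API).
class ToyRDD:
    def __init__(self, data):
        self.data = list(data)

    def map(self, fn):
        # Transformation: produce a new collection, leaving the old one intact.
        return ToyRDD(fn(x) for x in self.data)

    def filter(self, fn):
        return ToyRDD(x for x in self.data if fn(x))

    def reduce(self, fn):
        # Action: fold the collection down to a single value.
        result = self.data[0]
        for x in self.data[1:]:
            result = fn(result, x)
        return result

rdd = ToyRDD(range(1, 6))
total = (rdd.map(lambda x: x * x)      # 1, 4, 9, 16, 25
            .filter(lambda x: x % 2 == 1)  # 1, 9, 25
            .reduce(lambda a, b: a + b))
print(total)  # 35
```

In real Spark the same chain would be distributed across a cluster and evaluated lazily, which is where the performance gains over disk-bound MapReduce come from.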

The whole concept of big data, or total data, and how to collect it and get it to the data lake can sound scary, but it becomes less so if you break down the data collection problem into subsets.

