Big Data Training Chennai
Learn the basics of Hadoop. You will learn about the Hadoop architecture, HDFS, MapReduce, Pig, Hive, JAQL, Flume, and many other related Hadoop technologies. Practice with hands-on labs on a Hadoop cluster using our Cloud Server Access.
Hadoop is an open source framework for processing, storing and analyzing huge amounts of unstructured data. The fundamental concept is to break Big Data into multiple smaller data sets, so that each data set can be processed and analyzed in parallel. Hadoop is best suited for large-scale but relatively simple tasks such as filtering, sorting, converting and analyzing data.
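The split-and-process-in-parallel idea can be sketched in plain Python. This is an illustration of the concept only, not Hadoop itself; the function names and chunk counts here are made up for the example:

```python
# Illustration of the core Hadoop idea: break a large data set into
# smaller data sets, process each in parallel, then combine results.
# Plain-Python sketch only -- Hadoop does this across a cluster.
from concurrent.futures import ProcessPoolExecutor

def split(data, n_chunks):
    # Break the data set into roughly equal smaller data sets.
    size = max(1, len(data) // n_chunks)
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_chunk(chunk):
    # A simple "analysis" step: count records matching a filter.
    return sum(1 for record in chunk if record % 2 == 0)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = split(data, 8)
    with ProcessPoolExecutor() as pool:
        partial_counts = pool.map(process_chunk, chunks)
    # Combine the per-chunk results into one answer.
    print(sum(partial_counts))  # 500000 even numbers
```

Each chunk is independent, so the work scales out by adding more workers, which is exactly the property Hadoop exploits across machines.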
- Hadoop Distributed Filesystem (HDFS) creates replicas of data blocks and distributes them across the compute nodes of the cluster;
- MapReduce divides each job into two phases: the “Map” function splits a query into multiple parallel tasks, and the “Reduce” function combines their results to form the output;
- HBase is a Hadoop database that provides random, real-time read and write access to data stored in HDFS;
- Hive is an analysis tool: it uses a SQL-like syntax to rapidly develop queries. It is mostly used for offline batch processing, ad-hoc querying and statistical analysis of large data warehouse systems;
- Mahout is a framework for deploying many machine learning algorithms on large datasets, mostly used in clustering, classification and text mining.
- Pig is a platform for analyzing large data sets. Pig programs are amenable to substantial parallelization, which lets them handle very large data volumes effectively. Pig uses a language called Pig Latin and offers easy programming, automatic optimization and extensibility;
- Oozie is an open source workflow scheduler system to manage Apache Hadoop data processing jobs. An Oozie workflow consists of actions and dependencies. Users model workflows as Directed Acyclic Graphs (DAGs); Oozie manages the dependencies at runtime and executes each action when the dependencies identified in the DAG are satisfied. Yahoo!’s workflow engine uses Oozie to manage jobs running on Hadoop (Yahoo!, 2010);
- ZooKeeper is a centralized service that enables highly reliable distributed coordination. It maintains configuration information and provides distributed synchronization and group services for distributed applications;
- Flume is a distributed system that brings data into HDFS. The Apache Flume website describes Flume as “a distributed, reliable and available service for efficiently collecting, aggregating and moving large amounts of log data.” It enables applications to collect data from their origin and send it to HDFS;
- HCatalog provides table management and storage management for data created using Hadoop. It offers a shared schema and data type mechanism, and can interoperate across data processing tools such as Pig, Hive and MapReduce.
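The two-phase Map/Reduce flow described above can be sketched in plain Python. This is a conceptual illustration, not the Hadoop MapReduce API; the function names (`map_phase`, `shuffle`, `reduce_phase`) are invented for the example:

```python
# Minimal sketch of the MapReduce pattern: a map phase emits
# (key, value) pairs, a shuffle groups them by key, and a reduce
# phase combines each group into a final result.
from collections import defaultdict

def map_phase(document):
    # "Map": emit a (word, 1) pair for every word in the input.
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    # Group values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # "Reduce": combine each group's values into the output.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big cluster", "data pipeline"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'big': 2, 'data': 2, 'cluster': 1, 'pipeline': 1}
```

In real Hadoop, the map tasks run in parallel across the cluster, the framework performs the shuffle over the network, and the reduce tasks write results back to HDFS.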
- BigTop is a project for packaging and testing the Hadoop ecosystem. It assembles the 100% open source Apache Hadoop big data stack, including Hadoop, HBase, Hive, Mahout, Flume and others. This full stack of components provides the user with a complete data collection and analytics pipeline (Apache Incubator PMC).