This post will get you started with Hadoop, HDFS, Hive and Spark, fast.
What is Spark?
Apache Spark is a fast, general-purpose engine for large-scale data processing. You can write code in Scala or Python and it will automagically parallelize itself on top of Hadoop. Under the hood, it runs map/reduce-style jobs.
What are Hadoop and HDFS?
Hadoop is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. In practice it is a common library, called Hadoop Common, and a processing framework, called Hadoop MapReduce, that sit on top of a distributed file system called HDFS.
What is Yarn?
The Hadoop Distributed File System, or HDFS, is a way to spread file system data across a bunch of worker machines. Job scheduling and cluster resource management are handled by a system called Yarn.
What is Hive?
Hadoop alone doesn’t know much about the structure of your data and deals in plain text files. Most humans work with SQL, so the Apache Hive data warehouse software facilitates reading, writing, and managing large datasets residing in distributed HDFS storage using SQL. It lets you create and query a SQL schema on top of text files, which can be in various formats, including the usual CSV or JSON.
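For instance, a minimal sketch of what that looks like for a comma-separated file (the table name and columns here are hypothetical):

    CREATE TABLE people (name STRING, age INT)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE;

    SELECT name FROM people WHERE age > 30;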
A good place to start is to run a few things locally.
On OSX run brew install hadoop, then configure it (This post was helpful.) I turned the configuration into a script in my dotfiles. Once installed you can run hstart. Once everything is working, you can navigate to the HDFS NameNode UI on http://localhost:50070 or the Yarn resource manager on http://localhost:8088 and run a small test.
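For a quick smoke test, you can run the MapReduce example jar that ships with Hadoop; the Cellar path and X.Y.Z version below are assumptions, so adjust them to whatever brew actually installed:

    hdfs dfs -mkdir -p /user/$(whoami)
    hadoop jar /usr/local/Cellar/hadoop/X.Y.Z/libexec/share/hadoop/mapreduce/hadoop-mapreduce-examples-X.Y.Z.jar pi 2 5

If HDFS and Yarn are up, the job shows in the resource manager UI and prints an estimate of Pi at the end.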
Once you’ve installed Hadoop, install Hive. On OSX run brew install hive, then configure it. I turned the configuration into a script in my dotfiles as well. The biggest difficulty is that you need to initialize a metastore, where Hive stores its table metadata, with schematool -initSchema -dbType derby (or another dbType, such as mysql or postgres). In the case of Derby, the metastore_db is created in whatever directory you run the command from, so it needs to be tied down to a fixed location via hive-site.xml.
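A minimal sketch of the relevant hive-site.xml property, assuming Derby and an example path of /usr/local/var/hive (any writable directory works):

    <configuration>
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <!-- an absolute databaseName pins the Derby metastore to one place -->
        <value>jdbc:derby:;databaseName=/usr/local/var/hive/metastore_db;create=true</value>
      </property>
    </configuration>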
Once installed you can run hive and get a hive> prompt.
Before we do that, let’s get some data into HDFS.
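For example, with a hypothetical sales.csv in your current directory (the file name and contents are made up for illustration):

    hdfs dfs -mkdir -p /user/$(whoami)/sales
    hdfs dfs -put sales.csv /user/$(whoami)/sales/
    hdfs dfs -ls /user/$(whoami)/sales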
Now load the data into Hive. If you already have data files in HDFS you can just use those and add a schema on top of them with CREATE EXTERNAL TABLE.
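Continuing the hypothetical sales.csv from above, something like this layers a schema over the files without moving them; the columns are assumptions about the file’s layout, and the LOCATION should match wherever you put the data in HDFS:

    CREATE EXTERNAL TABLE sales (id INT, item STRING, price DOUBLE)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE
    LOCATION '/user/yourname/sales';

    SELECT item, SUM(price) FROM sales GROUP BY item;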
Neither Hadoop nor Hive is a prerequisite for running Spark. On OSX, install it with brew install apache-spark. From your installation in /usr/local/Cellar/apache-spark/X.Y.Z, run ./bin/run-example SparkPi 10. You should see something like Pi is roughly 3.1413551413551413.
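To try it from Python, here is a minimal sketch that estimates Pi the same way SparkPi does; the script name and sample count are arbitrary, and you would run it with spark-submit pi_sketch.py:

    # pi_sketch.py -- estimate Pi by sampling random points in parallel
    import random
    from pyspark import SparkContext

    sc = SparkContext(appName="pi-sketch")
    n = 100000
    # count samples that land inside the unit quarter-circle
    inside = (sc.parallelize(range(n))
                .filter(lambda _: random.random() ** 2 + random.random() ** 2 < 1)
                .count())
    print("Pi is roughly %f" % (4.0 * inside / n))
    sc.stop()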