Mean On Azure Episode 1

I just published the first episode in a series called Mean On Azure. In it I take you through my journey of discovery in the MEAN stack in bite-sized videos, showing you not only how to use the MEAN stack, but how to use it on Azure.

To view the whole series on Channel 9 go to http://sogeek.us/meanonazure

Big Data: How Big is Big?

Size

When it comes to Big Data, there is no getting away from a discussion of the size of the data: Volume is one of the three V's.

“Big data” is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making. ~Gartner


The interesting part about the volume of data is not just the sheer amount of it we generate on a daily basis, but the fact that, thanks to the falling cost of storage, we are able to keep a good portion of it around.  This in turn gets us talking about size.  This is where people try to dazzle you with their intellect.  For example, you may have heard of an exabyte, and you may even know that an exabyte is

1 EB = 10^18 bytes (a 1 with 18 zeros behind it)

1,000,000,000,000,000,000 bytes

But even if you are a mathematician, numbers in that format are hard to work with, because most people cannot visualize the size of data written that way.

Is that important?  You bet it is.  One of the key skills of a Data Scientist is being able to convey the complex to non-technical people, bridging the gap between those with the tools and those with the power (or the budget) in an organization.  This may be a small example, but it is a good one: breaking data sizes down into terms people can understand matters.

So all of that being said, I like to describe it in a way that even my Grandma can understand: I just relate everything to a GB.  From GB hard drives, to GB in your phone, to GB in your camera, most people can wrap their heads around the size because they are used to doing mental calculations based on how many songs, how many files, how many pictures they can fit on x number of GBs.

OK… so let's start talking about big numbers now.

Terabyte = 1024 GB.

With the low cost of storage now, many of you may have terabyte drives at home or in your computer.  I have a 2TB drive that I carry around with EVERYTHING I need on it.  So this one is not much of a comprehension challenge.

Petabyte = Over 1 Million Gigabytes

OK, so one step up: over one million gigabytes.  What does that get us?  Well, the 10 billion photos on Facebook take up about 1.5 petabytes.  And Google processes around 20 petabytes of data per day.

Exabyte = Over 1 Billion Gigabytes

When we get into exabytes, we start talking about how much storage entire data center buildings hold, like the Utah Data Center for the Comprehensive National Cybersecurity Initiative (CNCI), or the cold storage for those Facebook pictures we mentioned.

Zettabyte = Over 1 Trillion Gigabytes

It is said that over 2 ZB of data is created every day.  We don't store that much (we don't have the ability), but that much is created.  The prefix "zetta" was adopted by the 19th General Conference on Weights and Measures in 1991 (along with "yotta").  So where is all of this data coming from?  See the infographic below.

[Infographic: Data in One Minute]

infographic by Domo.com

Yottabyte = Over 1,000 Trillion Gigabytes

Of course, if you are the NSA, you need more storage, like the NSA's secret (ha ha) data center in Utah: http://www.foxnews.com/tech/2013/06/13/what-we3-know-utah-nsa-mega-data-warehouse/

Imagine being able to comb through that data.  A yottabyte is so big that you have to start talking about small things to put it in perspective.  For example, there is said to be a yottabyte of atoms in 7,000 human bodies, and a yottabyte of 1TB hard drives would require a data center covering 1 million city blocks.
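
If you want the whole ladder in one place, here is a minimal Java sketch (assuming binary units, where 1 GB = 2^30 bytes) that prints each unit in the only yardstick Grandma needs: gigabytes.

```java
import java.math.BigInteger;

public class SizeLadder {
    public static void main(String[] args) {
        BigInteger kibi = BigInteger.valueOf(1024);
        String[] units = {"Terabyte", "Petabyte", "Exabyte", "Zettabyte", "Yottabyte"};
        // A terabyte is 1024^1 GB, a petabyte is 1024^2 GB, and so on up the ladder.
        for (int i = 0; i < units.length; i++) {
            BigInteger gb = kibi.pow(i + 1);
            System.out.printf("1 %-9s = %,d GB%n", units[i], gb);
        }
    }
}
```

Running it shows, for example, a zettabyte coming out to 1,099,511,627,776 GB (just over a trillion) and a yottabyte to 1,125,899,906,842,624 GB (just over a thousand trillion).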


Big Data — What is HBase?

The great thing about Big Data technology is that there are so many tools in the Data Scientist's tool belt. The bad thing about Big Data technology is that there are so many tools in the Data Scientist's tool belt.

When we talk about the tools we use when working with Big Data, an overwhelming majority will discuss Hadoop, the Apache Foundation's implementation of MapReduce and a distributed file system (HDFS in this instance), created by Doug Cutting while he was at Yahoo, after reading papers on the subject published by Google engineers. (He is now at Cloudera.) But big data tools rarely, if ever, work alone. It is a collection of tools and databases that helps Data Scientists be more effective in their analysis (or just helps to speed things up).

One of these technologies is HBase. HBase is a non-relational (NoSQL) database that is a Java implementation of Google's Bigtable. It is what is referred to as a columnar database: as opposed to a relational database, which stores its data in rows, it stores its data in columns, as the toy example below illustrates.
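
Here is a toy Java illustration of that difference (this is not HBase's actual on-disk format, just the idea): row-oriented storage keeps each record's fields together, while column-oriented storage keeps each column's values together.

```java
import java.util.List;
import java.util.Map;

public class RowVsColumn {
    public static void main(String[] args) {
        // Row-oriented: one entry per record, fields stored side by side.
        List<Map<String, String>> rows = List.of(
                Map.of("FirstName", "Alice", "City", "Seattle"),
                Map.of("FirstName", "Bob", "City", "Boston"));

        // Column-oriented: one entry per column, values stored side by side.
        Map<String, List<String>> columns = Map.of(
                "FirstName", List.of("Alice", "Bob"),
                "City", List.of("Seattle", "Boston"));

        System.out.println(rows);
        System.out.println(columns);
    }
}
```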

That's easy to say, but what exactly does it mean? Let's start with the definition Google lays out in its paper on Bigtable: http://static.googleusercontent.com/media/research.google.com/en/us/archive/bigtable-osdi06.pdf

“A Bigtable is a sparse, distributed, persistent multidimensional sorted map.”

Let's break that down to see what each of those words means.

Sparse

A database is usually called sparse for lack of data, but not here in the traditional sense of having very few items in it.  HBase is called sparse because its entities can hold sparse data.  As opposed to a relational table, which requires you to fill out all (or most) of the fields (think of a customer table in a relational database), a row in a columnar database can leave values empty or NULL without adversely affecting the database's functionality.  This also gives you the added benefit of being able to add new pieces of data you would like to capture on the fly.  In a relational database, you create a schema (FirstName, LastName, SS#, TelephoneNumber) and hope you have thought of all the data you need to capture at creation time.  Schema-less NoSQL databases let you add fields as they are needed or discovered, without interrupting the normal flow of operations, as in the sketch below.
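
To make that concrete, here is a hedged sketch using the HBase Java client API (the 1.0+ style; older releases use HTable and Put.add instead).  The customers table, the info column family, and the row keys are all hypothetical, and the table is assumed to already exist:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SparseRows {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table customers = conn.getTable(TableName.valueOf("customers"))) {

            // Row 1: only a first and last name -- no NULL placeholders needed.
            Put alice = new Put(Bytes.toBytes("cust-001"));
            alice.addColumn(Bytes.toBytes("info"), Bytes.toBytes("FirstName"), Bytes.toBytes("Alice"));
            alice.addColumn(Bytes.toBytes("info"), Bytes.toBytes("LastName"), Bytes.toBytes("Smith"));
            customers.put(alice);

            // Row 2: adds a TwitterHandle column "discovered" later.
            // No ALTER TABLE, no schema migration -- columns exist per row.
            Put bob = new Put(Bytes.toBytes("cust-002"));
            bob.addColumn(Bytes.toBytes("info"), Bytes.toBytes("FirstName"), Bytes.toBytes("Bob"));
            bob.addColumn(Bytes.toBytes("info"), Bytes.toBytes("TwitterHandle"), Bytes.toBytes("@bob"));
            customers.put(bob);
        }
    }
}
```

Notice that neither row carries placeholders for the columns it lacks; HBase only stores the cells that are actually written, which is exactly what "sparse" buys you.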

Distributed and Persistent

HBase utilizes HDFS (the Hadoop Distributed File System) to distribute data across several commodity servers.  This is how Hadoop and HBase are able to work with vast amounts of data.  HDFS is based on another paper from Google, on the Google File System, which Doug Cutting used as a basis for his design: http://static.googleusercontent.com/media/research.google.com/en/us/archive/gfs-sosp2003.pdf .  We will discuss HDFS and distributed file systems in more detail in another post.

Multidimensional sorted Map

A map (sometimes called an associative array) is a collection where the index of what is being stored does not have to be an integer but can be an arbitrary string.  It is a collection of key/value pairs where each key is unique, and the keys are kept sorted in lexicographical order (not alphabetical, not alphanumeric, but sorted on the Unicode values of the string).  That covers "sorted map"; the "multidimensional" part comes from the fact that the value under each row key is itself a map: row key → column family → column qualifier → timestamp → value.
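
Java's TreeMap makes a handy toy model of that ordering (HBase actually compares raw bytes, but for ASCII row keys the effect is the same):

```java
import java.util.TreeMap;

public class SortedMapDemo {
    public static void main(String[] args) {
        // Keys are arbitrary strings, kept sorted lexicographically --
        // by code unit value, not "naturally".
        TreeMap<String, String> rows = new TreeMap<>();
        rows.put("row2",  "...");
        rows.put("row10", "...");
        rows.put("Row1",  "...");

        // Prints: Row1, row10, row2
        // 'R' (U+0052) sorts before 'r' (U+0072), and "row10" sorts before
        // "row2" because '1' < '2' -- lexicographic, not numeric.
        rows.keySet().forEach(System.out::println);
    }
}
```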

What You Gain/Give Up

Using HBase allows you to store your data, both pre-processing and post-processing, in one place, gives you greater flexibility, and lets you hold billions of rows of data with rapid keyed access. The downside is that when you use HBase instead of plain HDFS files, tools like Hive (SQL-like retrieval of data) run about 4-5 times slower, and the maximum amount of data you can practically hold is about 1 petabyte, as opposed to roughly 30 PB in HDFS.
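
The "rapid access" half of that trade-off is the keyed read path: because the map is sorted by row key, HBase can jump straight to a single row without a MapReduce job or a scan of raw files. A sketch, reusing the hypothetical customers table from above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyedRead {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table customers = conn.getTable(TableName.valueOf("customers"))) {
            // A Get fetches one row directly by key -- no full table scan.
            Get get = new Get(Bytes.toBytes("cust-002"));
            Result result = customers.get(get);
            byte[] handle = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("TwitterHandle"));
            System.out.println(handle == null ? "(no handle)" : Bytes.toString(handle));
        }
    }
}
```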

In the next post we will dive deeper into the specifics of HBase: setup, usage, and analysis.