Archive | Columnar Database

Example of a Big Data Refinery with Pentaho Analytics and HP Vertica

27 Mar

When you build an enterprise Big Data Analytics architecture, your design and technology choices should be driven top-down from business user requirements. The old axiom of BI & DW projects from the bad old days of data warehousing still holds true with today's modern data architectures: your analytics solution will only be a success if the business actually uses it to make better decisions.

As you piece together a pilot project, you will begin to see patterns emerge in the way that you collect, manage, transform and present the data for consumption. Forrester did a nice job of classifying these patterns in a paper called "Patterns in Big Data". For the purposes of a short, simple blog post, I am going to focus on one pattern here: the "Big Data Refinery", using one of our Pentaho technology partners, HP Vertica, an MPP analytical database engine with columnar storage.

There are two reasons for starting with that use case. First, the Forrester paper kindly references Fluent, the product I worked on as Technology Director at Razorfish. You can read more about it at the Forrester link above or in one of my Slideshares here. Second, at the Big Data TechCon conference on April 1, 2014 in Boston, Pentaho will present demos focused on this architecture with HP Vertica. So it seems like a good time to look at the Big Data Refinery as a Big Data Analytics data pattern.

Here is how Forrester describes Big Data Refinery:

The distributed hub is used as a data staging and extreme-scale data transformation platform, but long-term persistence and analytics is performed by a BI DBMS using SQL analytics

What this means is that you use Hadoop as the landing zone for raw data, transformations, aggregations and data treatment, while a purpose-built platform like Vertica hosts the distributed schemas and marts for OLAP business analytics with a tool like Pentaho Analytics. The movement of data and transformations through this platform needs to be orchestrated by an enterprise-ready data integration tool like Pentaho Data Integration (Kettle), and because we are presenting analytics to end users, the analytics tools must support scalable data marts with MDX OLAP capabilities.
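To make the refinery hop concrete, here is a minimal Python sketch of the pattern: aggregate granular events in Hive (the Hadoop landing zone) so only refined rows leave the cluster, then bulk-load the result into a Vertica mart. In a real deployment PDI (Kettle) would own this hop visually; the host names and table names (web_events_raw, daily_clicks_mart) below are hypothetical.

```python
# A minimal sketch of the refinery hop, assuming a pyhive/vertica-python
# stack. Host names and table names (web_events_raw, daily_clicks_mart)
# are hypothetical; in a real deployment PDI (Kettle) would own this hop.
import csv
import io

from pyhive import hive          # pip install 'pyhive[hive]'
import vertica_python            # pip install vertica-python

# 1. Refine in-cluster: aggregate granular events in Hive so that only
#    the reduced, refined rows ever leave Hadoop.
hive_conn = hive.connect(host="hadoop-edge-node", port=10000)
hive_cur = hive_conn.cursor()
hive_cur.execute("""
    SELECT to_date(event_ts) AS event_date, site_id, COUNT(*) AS clicks
    FROM web_events_raw
    GROUP BY to_date(event_ts), site_id
""")
refined_rows = hive_cur.fetchall()

# 2. Land in the mart: stream the refined aggregate into Vertica via bulk COPY.
buf = io.StringIO()
csv.writer(buf).writerows(refined_rows)
buf.seek(0)

with vertica_python.connect(host="vertica-node1", port=5433, user="dbadmin",
                            password="...", database="analytics") as v_conn:
    v_cur = v_conn.cursor()
    v_cur.copy("COPY daily_clicks_mart FROM STDIN DELIMITER ','", buf)
```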

This reference architecture can be built using Pentaho, HP Vertica and a Hadoop distribution, as shown below. This is just one example of Pentaho Business Analytics working with HP Vertica to solve this particular pattern; it can be architected with a number of different MPP & SMP databases or Hadoop distributions as well.

[Figure: Big Data Refinery reference architecture with Pentaho, HP Vertica and Hadoop]

PDI (Kettle) provides data orchestration at all layers in this architecture, including visual MapReduce in-cluster at the granular Hadoop data layer as well as ETL with purpose-built bulk loaders for Vertica. Pentaho Analysis Services (Mondrian) provides the MDX interface, and end-user reporting tools like Pentaho Analyzer and Pentaho Report Designer are the business decision tools in this stack.
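To illustrate what that Mondrian layer does, here is a conceptual sketch only (this is not Mondrian's API or internals): a ROLAP engine answers an MDX rollup by generating a star-join GROUP BY that Vertica then executes. The table and column names follow the VMart sample schema but should be treated as assumptions.

```python
# Conceptual sketch only -- NOT Mondrian's actual code or API. It shows the
# idea behind a ROLAP engine: an MDX rollup becomes a star-join GROUP BY
# that the relational engine (here, Vertica) executes.
def rollup_sql(fact_table, measure, dim_joins, group_cols):
    """Generate the SQL a ROLAP engine might emit for a simple rollup."""
    joins = " ".join(
        f"JOIN {dim} ON {fact_table}.{fk} = {dim}.{pk}"
        for dim, fk, pk in dim_joins
    )
    cols = ", ".join(group_cols)
    return (f"SELECT {cols}, SUM({measure}) AS total "
            f"FROM {fact_table} {joins} GROUP BY {cols}")

# e.g. total sales by calendar year -- roughly what an MDX request for
# [Measures].[Sales] by [Date].[Year] would translate to (VMart-style names):
print(rollup_sql("store_sales_fact", "sales_dollar_amount",
                 [("date_dimension", "date_key", "date_key")],
                 ["date_dimension.calendar_year"]))
```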

So if you were to pilot this architecture using the HP Vertica VMart sample star schema data set, you would auto-model a semantic model using Pentaho's web-based analytics tools to get a base model like this one, built from the VMart Warehouse, Call Center and Sales marts:

[Screenshot: auto-modeled VMart semantic model in Pentaho]

Then open that model in Pentaho Schema Workbench to augment and customize it with additional hierarchies, custom calculations, security roles, etc.:

[Screenshot: customizing the VMart model in Pentaho Schema Workbench]

From there, you can build dashboards using the published model and present analytical sales reports to your business from the VMart data warehouse in Vertica, like this:

[Screenshot: analytical sales report built on the VMart warehouse]
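Behind a dashboard like that, the published model ultimately issues SQL against Vertica. Here is a minimal sketch using the vertica-python client; the connection details are placeholders, and the VMart table and column names should be verified against your own install:

```python
# A minimal sketch of the kind of query a VMart sales dashboard issues
# against Vertica. Connection details are placeholders; the table/column
# names follow HP's VMart sample schema (verify against your install).
import vertica_python

conn_info = {"host": "vertica-node1", "port": 5433, "user": "dbadmin",
             "password": "...", "database": "VMart"}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute("""
        SELECT d.calendar_year,
               d.calendar_month_name,
               SUM(f.sales_dollar_amount) AS sales
        FROM store.store_sales_fact f
        JOIN public.date_dimension d USING (date_key)
        GROUP BY 1, 2
        ORDER BY 1, 2
    """)
    for year, month, sales in cur.fetchall():
        print(year, month, sales)
```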

Much of this is classic Business Intelligence solution architecture. The takeaway I'd like you to have for the Big Data Refinery is that you are focusing your efforts on a Big Data Analytics strategy that refines granular data points stored in Hadoop into manageable, refined data marts through the power of a distributed MPP analytical engine like HP Vertica. An extension of this concept would enable secondary connections from the OLAP model or the end-user reporting tool directly to the detail data stored in Hadoop, through an interface like Hive, to drill down into detail stored in-cluster.
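A minimal sketch of that drill-through extension, assuming a Hive interface to the in-cluster detail: summaries come from the Vertica mart, while a drill into row-level detail queries Hive directly. The host name and the raw table name (sales_detail_raw) are hypothetical.

```python
# Sketch of the drill-through hop: summary queries hit the Vertica mart,
# but a drill into row-level detail goes to Hive in-cluster. The host and
# the raw table name (sales_detail_raw) are hypothetical.
from pyhive import hive

conn = hive.connect(host="hadoop-edge-node", port=10000)
cur = conn.cursor()
cur.execute("""
    SELECT *
    FROM sales_detail_raw
    WHERE sale_date = '2014-03-01' AND store_id = 42
    LIMIT 100
""")
for row in cur.fetchall():
    print(row)
```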

The Role of “Big Databases” in Big Data

1 Oct

Big Data requires a Big Database, right?

First, let me explain what I mean by a "big database". I'm referring to data warehouse appliances such as Oracle's Exadata, Microsoft's Parallel Data Warehouse and Teradata.

But then there are also “NoSQL” databases that store key/value pairs or JSON document objects like MongoDB, Cassandra and DynamoDB.

And then there are column-oriented databases like Vertica and MPP-style databases like Aster Data and Netezza.

In the world of Big Data Analytics, you must serve your clients with extremely large, fine-grained data sets that can be quickly & easily traversed, queried, loaded and archived.

In practice, classic database configurations of shared SAN storage and SMP servers do not scale well to this degree. NoSQL databases are not always feasible because you may want to create, store and archive data at all grains and aggregations, as well as run in-database analytics.

That leaves data warehouse appliances, column-oriented databases and MPP engines as the best targets for these data patterns. One more note: you could also perform aggregations and some analytics during data parsing and loading with tools like MapReduce, as sketched below, but I'll go into that detail in another posting.
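As a quick sketch of that idea, here is a tiny MapReduce job written with the mrjob library that pre-aggregates event counts during the load path, before anything touches the database. The tab-separated log format is hypothetical.

```python
# A minimal sketch of aggregating during the load path with MapReduce,
# using the mrjob library: count events per (date, site) in-cluster
# before anything is loaded into the database. Log format is hypothetical.
from mrjob.job import MRJob


class DailyEventCounts(MRJob):
    def mapper(self, _, line):
        # Expect tab-separated lines: ISO timestamp, site_id, event_type, ...
        fields = line.split("\t")
        date, site_id = fields[0][:10], fields[1]
        yield (date, site_id), 1

    def reducer(self, key, counts):
        yield key, sum(counts)


if __name__ == "__main__":
    DailyEventCounts.run()
```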

What I am finding is that many business leaders and decision-makers in organizations currently looking to Big Data solutions do not want to put a lot of resources and investment into traditional RDBMS configurations that require a large amount of care, feeding and maintenance. With Oracle & SQL Server, you will still have plenty of knobs to turn, indexes to tune and other settings to tweak.

In the big data analytics world, then, Massively Parallel Processing (MPP) databases are very popular. It's an easy image for a business decision-maker to picture: a database partitioned across worker nodes that can be load-balanced and extended by adding more capacity.
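Here is a toy, pure-Python sketch of that mental model: rows are hash-partitioned across worker nodes by a partition key, and adding nodes adds capacity. Real MPP engines use far more sophisticated placement schemes and rebalance data when the topology changes; this sketch does neither.

```python
# Toy sketch of the MPP mental model: each row's partition key is hashed
# to pick the worker node that owns it. Real engines use smarter placement
# (e.g. consistent hashing) and rebalance data when nodes are added.
import hashlib


def node_for(key: str, num_nodes: int) -> int:
    """Pick the worker node that owns a row, by hashing its partition key."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes


rows = ["customer-1001", "customer-1002", "customer-1003", "customer-1004"]
for key in rows:
    print(key, "-> node", node_for(key, num_nodes=4))
```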

Whether that is the best fit for you takes a lot of analysis and examination of all of those data store options. I would even be leery of database vendors over-selling the MPP option unless you have fully accounted for the additional complexity of managing a fully distributed, sharded database.

Br, Mark

Did Big Data Kill OLAP Cubes?

19 Sep

Did Big Data Kill OLAP Cubes? Not yet, but very possibly soon.

Think about the traditional usage and purpose of OLAP cubes in terms of their predominant deployment today. In most cases, enterprises use cubes to aggregate and pre-process data from multiple data sources and/or a data warehouse to provide BI capabilities.

Many of these use cases are based on data processing cycles that occur daily, with large data sets processed in bulk fashion. Well, that sounds quite a bit like the Big Data requirement of processing large data sets in bulk and then providing access to the post-processed data to analysts, scientists, etc.

So OLAP cubes clearly still have relevance and applicability in the Big Data world.

OLAP cubes provide value in a number of ways, including abstracting report queries away from the database and providing fast access to knowledge through techniques such as pre-aggregated, pre-built analytics in the cube. This is where things start to break down in terms of the future of OLAP cubes in Big Data use cases.

In Big Data use cases, we need to support much more ad hoc data exploration and self-service knowledge discovery. That makes building analytics into the cube from up-front requirements and assumptions very difficult; even in the most "Agile" BI shops, this is a challenge. The toy example below illustrates the tension.
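Here is a small pandas illustration of that tension (not any particular product's behavior): a cube pre-aggregates along dimensions chosen up front, while ad hoc exploration needs slices the cube designer never anticipated.

```python
# Toy illustration: a cube pre-aggregates along dimensions chosen up front,
# while ad hoc exploration needs slices nobody modeled in advance.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "product": ["A", "B", "A", "B"],
    "channel": ["web", "store", "web", "store"],
    "amount":  [100, 150, 120, 90],
})

# Cube-style: the (region, product) rollup was designed and built in advance.
prebuilt = sales.groupby(["region", "product"])["amount"].sum()

# Ad hoc: the analyst asks a question nobody modeled -- by channel only.
# A pre-built cube without a channel dimension cannot answer this; an
# in-memory, MPP or columnar engine simply scans and aggregates on demand.
ad_hoc = sales.groupby("channel")["amount"].sum()
print(prebuilt, ad_hoc, sep="\n\n")
```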

This is where in-memory technologies, MPP and columnar databases become key enablers in the BI stack for Big Data. I'm writing a few new posts for SQL Server Pro magazine and MSSQLDUDE, which I'll link to here over the next few days to explain this in more technical terms. Back here on Big Data Analytics, I'll talk about generic MPP techniques.

For now, be prepared to hear the BI and database industry talk about maximizing in-memory cubes & databases for BI & reporting purposes, replacing OLAP cubes.

This does NOT preclude the need for semantic modeling and abstraction layers. And OLAP cubes still play a very important role in specific use cases that do not involve large amounts of ad hoc querying.

However, Big Data architects do need to think about solving the traditional BI problems in a different way.
