Tag Archives: analytics

Azure Big Data Analytics in the Cloud

3 Nov

Hi All … I’m BAAAACK! Now that I’ve settled into my new role in the Microsoft Azure field team as a Data Solution Architect, I’m getting back out on the speaker circuit. Here are my next 2 speaking engagements:

Tampa SQL BI Users Group

Global Big Data Conference Dec 9 Tampa

At each of those, I will be presenting Azure Big Data Analytics in the Cloud, with Azure Data Platform overviews, demos, and slides.

I am uploading some of the demo content to my GitHub here

And the presentations on Slideshare here

 

Pentaho Native Analytics on MongoDB

15 Dec

Pentaho has a very rich and complete business analytics product suite. There is ETL, data integration, data orchestration, operational reporting, dashboards, BI developer tools, predictive analytics, OLAP analytics … and I'm probably missing a few others!

So when you are looking to implement a business intelligence and analytics solution on a Big Data platform outside the traditional RDBMS sphere, such as the MongoDB NoSQL database, you have the advantage of a complete BI product set that works out of the box and takes advantage of that platform's strengths.

What I mean by that is that with Pentaho, there are different tools to optimize each aspect of a complete BI solution. For instance, Pentaho Data Integration (PDI) hooks directly into MongoDB through its API to manipulate and move data as MongoDB documents. Pentaho Report Designer (PRD) uses that same direct-access mechanism to provide reporting for your business users directly on MongoDB.

The Pentaho 5.1 BA Suite release introduced interactive OLAP analytics on MongoDB using Pentaho Analyzer. This is Pentaho's unique capability to translate business users' slice-and-dice MDX queries directly into MongoDB Aggregation Pipeline queries.

With these capabilities, Pentaho does not require extracting and staging MongoDB data from documents in collections into traditional RDBMS tables. Instead, analytics are translated into native MongoDB query syntax on the fly, with no SQL required. And as I stated above, this allows you to fully leverage and optimize your Big Data source, in this case MongoDB. Pentaho pushes queries down into your MongoDB cluster, so you do not have to stand up an entirely separate analytics platform with its own hardware and scalability requirements.

Tips for Editing Pentaho Auto-Generated OLAP Models

1 Feb

If you've followed some of my earlier tutorials here or here, where I described the process of auto-generating OLAP models through the Pentaho auto-modeler, you will end up with a basic multidimensional star schema that allows a basic level of customization, as shown here:

[screenshot: thinmodel]

In most cases, that environment provides enough control to create a model that covers most of your analytical reporting needs. But if you want to build out a more complex model, you can manipulate the underlying Mondrian schema XML directly in a file, or use the Pentaho Schema Workbench tool to build out snowflake schemas, custom calculations, Analyzer annotations, etc.

[screenshot: psw4]

For direct XML editing of the multidimensional model, you can follow the Mondrian schema guide here.
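To give a sense of what you are editing, here is a minimal sketch of a Mondrian schema; the cube, table and column names below are placeholders I made up for illustration, not output from the wizard:

  <Schema name="SalesAnalysis">
    <Cube name="Sales">
      <Table name="fact_sales"/>
      <Dimension name="Product" foreignKey="product_id">
        <Hierarchy hasAll="true" primaryKey="product_id">
          <Table name="dim_product"/>
          <Level name="Category" column="category" uniqueMembers="true"/>
          <Level name="Product Name" column="product_name" uniqueMembers="false"/>
        </Hierarchy>
      </Dimension>
      <Measure name="Sales Amount" column="sales_amount" aggregator="sum" formatString="#,##0.00"/>
    </Cube>
  </Schema>

Snowflake schemas, custom calculations and Analyzer annotations are all layered onto this same structure; a snowflake dimension, for example, uses a Join element instead of a single Table element inside the Hierarchy.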

To pull the Mondrian model out of one of these Data Source Wizard sources for editing, click the Export button on the Data Sources dialog box below:

[screenshot: dsw]

If you use this method from the UI, you will download a ZIP file. Unzip it and save the “schema.xml” inside to your local file system. You can then edit that file in Schema Workbench (PSW) or in an XML editor, and import your changes back into the platform from that same Manage Data Sources dialog in the Web UI, or just publish the schema directly to your server from PSW:

[screenshot: import]

Here's another tip for pulling a Mondrian schema out of an auto-generated Data Source Wizard model, which I think is easier than exporting a ZIP: use the REST API call that extracts the XML schema directly. I downloaded curl on my Windows laptop to use as a command-line tool for calling Web Services APIs. Now I can make this REST call:

curl --user Admin:password http://localhost:8080/pentaho/plugin/data-access/api/datasource/analysis/foodmart/download > foodmart.xml

To make the above call work in your environment, change the “--user” credentials to your own username:password, replace the hostname with your server, and then replace “foodmart” with the name of the model that you wish to modify. You can then edit the resulting file (foodmart.xml) in PSW or with an XML editor.

Don’t forget to import the updated file back into the platform or Publish it from Schema Workbench so that users will then be able to build their reports from the new schema.

One last trick when I re-import or re-publish an edited model that started out as a generated Data Source Wizard model: rename the model in PSW or in the XML file so that it appears as a new model in the Pentaho tools. This way, you avoid losing your updates if you were to regenerate the model in the thin modeler from the Data Source Wizard again.
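The rename itself is just a change to the name attribute on the root Schema element of the exported XML, something along these lines (the “_v2” suffix is only an example I chose here):

  <Schema name="foodmart">   becomes   <Schema name="foodmart_v2">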

[screenshot: psw5]

Getting Started with Mondrian For SSAS Developers

22 Oct

Over on my MSSQLDUDE blog, I showed you how the Pentaho BI Suite can auto-model and create OLAP analytics for you using the Mondrian ROLAP engine, with the SQL Server Adventure Works DW as the source database, here and here. That is basically asking the Pentaho Enterprise Edition Suite (download an eval copy to get started here) to build the cube models for you based on a series of automated steps and assumptions.

But if you, like me, have a bunch of SQL Server Analysis Services (SSAS) cubes that you want to move over to Pentaho, or just want to use with the Mondrian ROLAP engine, it's pretty straightforward to migrate those models over to Mondrian. You can then use the entire Pentaho Suite and its visualizations, use Mondrian as an XMLA server, or even use Mondrian as a lightweight embedded MDX-based OLAP engine in your app. Take a look at Pentaho's Mondrian project home page here for more info on the Mondrian ROLAP engine.

So, here I am going to walk you through the process of migrating from SSAS over to the Pentaho Mondrian platform … at least, the way that I did it! First, here are a few differences between the engines to keep in mind as you migrate:

1. SSAS offers three storage options for your OLAP database: MOLAP, ROLAP and HOLAP. Mondrian is a pure-play relational OLAP (ROLAP) engine, converting MDX queries from sources such as XMLA client tools into SQL queries against the source relational engine. In this example, I will keep the source RDBMS database as-is in SQL Server and move my model over to Mondrian.

2. I am going to show you how to migrate just the base model, not the server infrastructure, which is fairly different between SSAS and Mondrian. This means we'll ignore things like partitioning, pre-built aggregation structures, roles, KPIs and translations.

3. Mondrian has a very good aggregation capability and caching engine, similar to that of Microsoft SSAS. Since I do not migrate these in this approach, take a look at the documentation on how to re-establish caching and aggregation tables in Mondrian here & here. Keep in mind that Mondrian is pure ROLAP, so aggregate tables live in the source database, not in a separate storage engine like the traditional SSAS MOLAP approach. I'll save those more advanced topics for a later follow-up post.

As we migrate your SSAS cubes over to Mondrian land, make sure to note these conversions in terminology and options:

1. Data types

I convert the data types of measures from SSAS to Mondrian. The differences show up most notably in the numeric types: you will need to switch from the various SSAS number types to the “Numeric” type in Mondrian.

2. Level types

Mondrian organizes dimensions into hierarchies with levels that represent the underlying dimension table columns. In SSAS, these are essentially equivalent to dimension attributes. So I convert the SSAS cube dimension attributes to Mondrian levels and convert the dimension attribute types to Mondrian level types. In Mondrian, a level type is either Regular or one of a set of time types (TimeYears, TimeQuarters, TimeMonths, and so on). When migrating from SSAS, you need to map the various SSAS date types onto those time types and decide how you want to handle the SSAS currency and geography types.

3. Aggregation types

SSAS has about 12 or so aggregation functions for measures. In Mondrian, you will need to boil those down to sum, count, min, max, avg, or distinct count. What I did was to remodel or remove unused measures of type FirstChild/LastChild or LastNonEmpty, along with any dimensions in a many-to-many (M:M) relationship. The schema fragment after the next paragraph shows what these conversions look like on the Mondrian side.

Many of the remaining attributes and properties in SSAS map naturally into the Mondrian schema, such as format strings, visibility, calculated members, name columns, hide-member-if, etc.
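As a rough sketch of where those conversions land in the Mondrian schema, here is an illustrative fragment; the table and column names are Adventure Works-style placeholders I chose for this example, not output from any tool:

  <Dimension name="Date" type="TimeDimension" foreignKey="OrderDateKey">
    <Hierarchy hasAll="true" primaryKey="DateKey">
      <Table name="DimDate"/>
      <Level name="Year" column="CalendarYear" type="Numeric" levelType="TimeYears" uniqueMembers="true"/>
      <Level name="Quarter" column="CalendarQuarter" type="Numeric" levelType="TimeQuarters" uniqueMembers="false"/>
      <Level name="Month" column="EnglishMonthName" levelType="TimeMonths" uniqueMembers="false"/>
    </Hierarchy>
  </Dimension>
  <Measure name="Sales Amount" column="SalesAmount" aggregator="sum" datatype="Numeric" formatString="#,##0.00"/>
  <Measure name="Order Count" column="SalesOrderNumber" aggregator="distinct-count" datatype="Integer"/>

The SSAS money and decimal measure types collapse to datatype="Numeric" here, the date attributes become the Time* level types, and the aggregator attribute takes one of the handful of functions listed in item 3.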

So those are the basics. The pieces of SSAS that I do not script or automatically convert are the MDX scripts where you define named sets, calculations and scope assignments inside the SSAS cube. Those are free-form MDX scripts, which you can rewrite in Mondrian as a named set or a calculated measure. However, if you are using a lot of many-to-many relationships or scope operators in your SSAS schema, you will need to find different ways of doing those in Mondrian 3.x (from the Pentaho 4.8 or 4.0 Suite) because they are not currently supported.
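If you do rewrite one of those MDX scripts by hand, the Mondrian equivalents are the NamedSet and CalculatedMember schema elements. A sketch of each, with purely illustrative member names and formulas:

  <NamedSet name="Top Products">
    <Formula>TopCount([Product].[Product Name].Members, 10, [Measures].[Sales Amount])</Formula>
  </NamedSet>
  <CalculatedMember name="Gross Profit" dimension="Measures">
    <Formula>[Measures].[Sales Amount] - [Measures].[Total Product Cost]</Formula>
    <CalculatedMemberProperty name="FORMAT_STRING" value="#,##0.00"/>
  </CalculatedMember>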

Now, if you have a fairly simple, straightforward cube in SSAS, you can script or automate a lot of this, like I mentioned above, which can help a lot with the mundane rewriting of XML or with using the Pentaho Schema Workbench to reconstitute an entire OLAP schema.

I wrote one in C# so that I can run it from the SSAS Windows server or from my laptop, and it performs all of the conversions and assumptions that I just listed, including converting SSAS measure groups to cubes in Mondrian and then generating a virtual cube to bring together measures from different fact tables. This is the familiar virtual cube concept that Mondrian 3.x utilizes. Again, you can do this all by hand or use a tool like this one.
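In the Mondrian schema, a virtual cube that brings measures from two converted measure groups back together looks roughly like this; the cube and measure names are only placeholders for illustration:

  <VirtualCube name="Adventure Works">
    <VirtualCubeDimension name="Date"/>
    <VirtualCubeMeasure cubeName="Internet Sales" name="[Measures].[Sales Amount]"/>
    <VirtualCubeMeasure cubeName="Reseller Sales" name="[Measures].[Reseller Sales Amount]"/>
  </VirtualCube>

A VirtualCubeDimension with no cubeName refers to a shared dimension, which is how a conformed Date dimension gets reused across both cubes.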

Here is my beginning SSAS 2012 MOLAP (not Tabular) schema against Adventure Works DW 2012:

[screenshot: mon1]

I’m just gonna build, deploy and browse that cube by slicing the Unit Cost by my Calendar Hierarchy:

[screenshot: mon2]

Next, I'll generate the base schema in Mondrian by using my SSAS2MONDRIAN command-line converter tool:

[screenshot: mon7]

All the switches on the command line are doing is setting the source database & cube, telling the converter to skip M:M relationships, including the SQL Server alias name in the XML output, and redirecting the output to a Mondrian XML schema file. The tool collects and iterates through all of the measure groups and dimensions, converting them to Mondrian, mapping the similar types to the Mondrian types, converting the SSAS measure expression syntax to Mondrian calculated measures, and building a virtual cube, which is named on the command line as well. There are a few other behind-the-scenes migrations occurring, all of which you can hand-crank if you'd prefer. This is definitely a time saver and also helps to avoid the human error of hand-editing XML.

[screenshot: mon5]

Alright, so the migration is done. I've opened the output XML file in Pentaho Schema Workbench (PSW) to validate the Mondrian schema, and it looks good: no red X marks. I like to test my models with the MDX query editor, which also performs an additional level of schema file validation. When you go to open the MDX query editor in PSW, look in your standard error output. If you use this utility, or any other method of hand-generating Mondrian schema XML, you may need to tweak the schema a little bit before it will validate.

[screenshot: mon6]

The MDX editor is showing the results that I am looking for, so I feel good about my quick & dirty smoke test. I can now publish the schema to the Pentaho platform and have fun with my new live Mondrian cube running in Pentaho, converted as-is from SSAS … you will see the names of the cubes (measure groups) as well as the grouped virtual cube:

[screenshot: mon3]

And there is my slicing & dicing in the Pentaho OLAP Analyzer tool against SQL Server from Mondrian, just as it was in SSAS:

[screenshot: mon4]

BTW, if you would like a copy of that Windows command-line program to automate the base model conversion, just send me an email (mkromer at pentaho dot com) or download a copy of the executable here. You don't need to use it, although it does simplify quite a bit of the tedious migration work. But the Mondrian XML schema is straightforward and easy to understand, and the Pentaho Schema Workbench is a nice tool to walk you through adding in the rest of the pieces.

Look for follow-ups on my blog here to continue using Mondrian, especially for SSAS users and for BI developers who are new to tools like Pentaho.
