SSAS Tabular using HiveODBC connection

[Figure: Hive to BI Semantic Model]

In my previous blog post “Import Hadoop Data into BI Semantic Model Tabular”, I mentioned that you need a SQL Server linked server connection to connect SSAS to a Hive table in Hadoop. That is the case for an SSAS Multidimensional instance, but a Tabular instance can connect to a Hive table directly. Thanks to Lara Rubbelke (Technical Architect at Microsoft), who brought to my attention that we can connect an SSAS Tabular project to Hive directly via a HiveODBC connection.

After a few test scenarios, we were able to get it to work. Here is how. We were running SQL Server Analysis Services 2012 in Tabular mode, 64-bit, on a 64-bit operating system. After creating an SSAS Tabular project using SQL Server Data Tools (SSDT), you have to create both a 32-bit and a 64-bit System DSN for this to work. When you create the SSAS import task, you are doing it from SSDT, which is a 32-bit process, so it can only see 32-bit DSNs. On a 64-bit operating system, System DSNs are 64-bit by default, so they don’t show up. User DSNs are visible to both 32-bit and 64-bit applications, so they don’t have this problem. This is a quirk of ODBC and isn’t specific to this driver.

To create a 32-bit DSN, run c:\windows\syswow64\odbcad32.exe and create the System DSN there; then create an identical one (same name and all settings) in the regular 64-bit ODBC Data Source Administrator that is launched from the Control Panel. When you create the import task from SSDT, it will pick the 32-bit DSN, and then at runtime, when the import happens, it will look for the 64-bit one and use it instead. As long as they are identical, this is fine.

 

Project “ChâteauKebob”

Authors: Ayad Shammout & Denny Lee


It may sound like a rather odd name for an End-to-End Auditing Compliance project – and the roots, admittedly enough, are based on the authors’ predilection toward great food in the city of Montréal – but there actually is an analogous association!

Château means manor house or palace, and kebob refers to meat that is cooked over or next to flames; large or small cuts of meat, or even ground meat, may be served on plates or in sandwiches (mouth watering yet?).  Château Kebob means a house of kebob with different meats.

So why did we call our project “ChâteauKebob”? In this project, Denny Lee and I used a château of different technologies in one framework, or house, with a kebob of mixed data, small or big, structured and unstructured, served on plates or in sandwiches… sorry, I meant served with multiple BI tools for reporting and analysis.

The purpose of this project is to provide a set of tools and jumpstart scripts to implement a project involving HDInsight (Hadoop), SQL Server 2012, SQL Server Analysis Services 2012, Excel 2013 PowerPivot, and Power View. You can review an overview of the project on SlideShare.

The SDK is available on GitHub, where you can download the entire project.


Import Hadoop Data into Analysis Services Tabular

[Figure: Hive to BI Semantic Model]

Hadoop brings scale and flexibility that don’t exist in the traditional data warehouse. Hive serves as a data warehouse for Hadoop, facilitating easy data summarization, ad-hoc queries, and the analysis of large datasets. Although Hive supports ad-hoc queries for Hadoop through HiveQL, query performance is often prohibitive for even the most common BI scenarios.
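For context, a typical ad-hoc HiveQL summarization looks something like the following. This is only a sketch; the weblogs table and its columns are hypothetical, shown to illustrate the kind of query that Hive handles well in batch but answers too slowly for interactive BI.

                -- HiveQL: summarize one day of a hypothetical weblogs table
                SELECT client_ip,
                       COUNT(*)        AS request_count,
                       SUM(bytes_sent) AS total_bytes
                FROM weblogs
                WHERE log_date = '2013-05-01'
                GROUP BY client_ip
                ORDER BY request_count DESC
                LIMIT 100;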

A better solution is to bring relevant Hadoop data into a SQL Server Analysis Services Tabular model by using HiveQL. Analysis Services can then serve up the data for ad-hoc analysis and reporting. But there is no direct way to connect an Analysis Services Tabular database to Hadoop. A common workaround is to create a linked server in a SQL Server instance that reaches Hive through the HiveODBC driver via the OLE DB Provider for ODBC (MSDASQL). The HiveODBC driver can be downloaded from here.

Create a Hive ODBC Data Source

The following steps show you how to create a Hive ODBC Data Source.

  1. Click Start -> Control Panel to launch the Control Panel for Microsoft Windows.
  2. In the Control Panel, click System and Security->Administrative Tools. Then click Data Sources. This will launch the ODBC Data Source Administrator dialog.

[Screenshot: launching the ODBC Data Source Administrator]

  3. In the ODBC Data Source Administrator dialog, click the System DSN tab.
  4. Click Add to add a new data source.
  5. Click the HIVE driver in the ODBC driver list.

[Screenshot: Create New Data Source dialog with the HIVE driver selected]

  6. Click the Finish button. This will launch the Hive Data Source Configuration dialog.

[Screenshot: ODBC Hive Setup dialog]

  7. Enter a data source name in the Data Source Name box. In this example, SQLHive.
  8. In this example, we are connecting to HDInsight (Hadoop on Windows Azure). In the Host box, replace the clustername placeholder with the actual name of the cluster that you created. For example, if your cluster name is “HDCluster1”, the final value for Host should be “HDCluster1.azurehdinsight.net”. Do not change the default port number of 563 or the default value of the Hive Server HTTP Path, /servlets/thrifths2. If you are connecting to an on-premises Hadoop cluster instead, the port number would be 10000.
  9. Click OK to close the ODBC Hive Setup dialog.

Once the HiveODBC driver is installed and the data source is created, the next step is to create a SQL Server linked server connection for HiveODBC.

SQL Server can serve as an intermediary: Analysis Services connects to Hadoop via the Hive linked server connection in SQL Server, so Hive appears as an OLE DB-based data source to Analysis Services.

The following components need to be configured to establish connectivity between a relational SQL Server instance and the Hadoop/Hive table:

  • A system data source name (DSN) “SQLHive” for the Hive ODBC connection that we created in the steps above.
  • A linked server object. The following Transact-SQL script illustrates how to create a linked server that points to a Hive data source via MSDASQL. The system DSN in this example is called “SQLHive”.

                EXEC master.dbo.sp_addlinkedserver
                    @server = N'SQLHive', @srvproduct = N'HIVE',
                    @provider = N'MSDASQL', @datasrc = N'SQLHive',
                    @provstr = N'Provider=MSDASQL.1;Persist Security Info=True;User ID=UserName;Password=pa$$word;';

              Note: Replace the User ID “UserName” and password “pa$$word” with a valid username and password to connect to Hadoop.
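After creating the linked server, you can verify that SQL Server can reach Hive, and, if you prefer not to embed credentials in the provider string, map a login with sp_addlinkedsrvlogin instead. A sketch, using the same placeholder credentials as above:

                -- Verify the linked server can connect
                EXEC sp_testlinkedserver N'SQLHive';

                -- Optionally, map all local logins to the Hadoop credentials
                -- instead of embedding them in @provstr
                EXEC master.dbo.sp_addlinkedsrvlogin
                    @rmtsrvname = N'SQLHive', @useself = N'FALSE',
                    @locallogin = NULL,
                    @rmtuser = N'UserName', @rmtpassword = N'pa$$word';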

  • A SQL statement based on the OpenQuery Transact-SQL command. OpenQuery connects to the data source, runs the query on the target system, and returns the result set to SQL Server. The following Transact-SQL script illustrates how to query a Hive table from SQL Server:

                SELECT * FROM OpenQuery(SQLHive, 'SELECT * FROM HiveTable;')

Here “HiveTable” is a placeholder for the name of the Hadoop Hive table; replace it with your actual Hive table name.
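Note that the inner query is executed by Hive itself, so it usually pays to filter and aggregate on the Hadoop side and return only the reduced result set to SQL Server. A hedged example of this pattern (the hiveauditlogs table and its columns are hypothetical):

                SELECT *
                FROM OpenQuery(SQLHive,
                    'SELECT eventdate, servername, COUNT(*) AS eventcount
                     FROM hiveauditlogs
                     WHERE eventdate >= ''2013-01-01''
                     GROUP BY eventdate, servername;')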

Once the linked server is created on the computer running SQL Server, it is straightforward to connect Analysis Services to Hive in SQL Server Data Tools. You can start by creating a new SQL Server Analysis Services Tabular project.

Create a BI Semantic Model Tabular project and connect to a Hadoop Hive table

The steps below describe how to import data from a Hive table into a new SSAS Tabular model using the linked server connection that you created in the steps above.

To create a new tabular model project

  1. In SQL Server Data Tools, on the File menu, click New, and then click Project.
  2. In the New Project dialog box, under Installed Templates, click Business Intelligence, then click Analysis Services, and then click Analysis Services Tabular Project.
  3. In Name, type Hive Tabular Model, then specify a location for the project files. By default, Solution Name will be the same as the project name; however, you can type a different solution name.
  4. Click OK.
  5. In SQL Server Data Tools, click on the Model menu, and then click Import from Data Source. This launches the Table Import Wizard which guides you through setting up a connection to a data source.
  6. In the Table Import Wizard, under Relational Databases, click Microsoft SQL Server, and then click Next.
  7. In the Connect to a Microsoft SQL Server Database page, in Friendly Connection Name, type SQLHive DB from SQL.
  8. In Server name, type the name of the SQL Server instance that hosts the linked server connection to Hadoop/Hive.
  9. In the Database name field, click the down arrow and select master, and then click Next.
  10. In the Impersonation Information page, you need to specify the credentials Analysis Services will use to connect to the data source when importing and processing data. Verify Specific Windows user name and password is selected, and then in User Name and Password, enter your Windows logon credentials, and then click Next.
  11. In the Choose How to Import the Data page, verify Write a query that will specify the data to import is selected. Change the query name to a friendly name, and in the SQL Statement window, type the following:

                SELECT * FROM OpenQuery(SQLHive, 'SELECT * FROM HiveTable;')

  12. Click Finish.
  13. Once the table is imported, you can import additional dimension tables and create relationships between the tables.
  14. The model is now ready to be deployed to a SQL Server Analysis Services (SSAS) Tabular instance.

The Hive ODBC driver makes it easy to import data from your Hadoop Hive table into a SQL Server Analysis Services Tabular instance database, where Business Intelligence tools may be used to view and analyze the data.

Optimizing Joins running on HDInsight Hive on Azure at GFS

Denny Lee

“…to look at the stars always makes me dream, as simply as I dream over the black dots of a map representing towns and villages…”
— Vincent Van Gogh

Image Source: Vincent Van Gogh Painting Tilt Shifted: http://coolvibe.com/2011/16-van-gogh-paintings-tilt-shifted/tilt-shift-van-gogh-15/


Introduction

To analyze hardware utilization within their data centers, Microsoft’s Online Services Division – Global Foundation Services (GFS) is working with Hadoop / Hive via HDInsight on Azure.  A common scenario is to perform joins between the various tables of data.  This quick blog post provides a little context on how we managed to take a query from >2h to <10min and the thinking behind it.

Background

The join is a three-column join between a large fact table (~1.2B rows/day) and a smaller dimension table (~300K rows).  The size of a single day of compressed source files is ~4.2GB; decompressed is ~120GB.  When performing a regular join (in Hive parlance “common…
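When one side of a join is as small as this dimension table, the standard Hive remedy is a map join: rather than shuffling both tables through the common (reduce-side) join, the small table is loaded into memory and joined in the mappers. A hedged HiveQL sketch of both ways to request it (table and column names are illustrative):

                -- Hint the small dimension table into a map-side join
                SELECT /*+ MAPJOIN(d) */ f.col1, d.attr, COUNT(*) AS cnt
                FROM facttable f
                JOIN dimtable d
                  ON f.key1 = d.key1 AND f.key2 = d.key2 AND f.key3 = d.key3
                GROUP BY f.col1, d.attr;

                -- Or let Hive convert eligible common joins automatically
                SET hive.auto.convert.join=true;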


Healthcare Compliance with Big Data and BI

Over the past few years, Denny Lee (Technical Principal Program Manager within Microsoft’s SQL Business Intelligence Group) and I have been working on some very exciting SQL Server projects. Earlier this month we presented “Big Data, BI, and Compliance in Healthcare” at the PASS BA Conference in Chicago, IL.

A few years ago, we implemented a “Centralized Audit Framework” to manage and view the audits of an entire SQL Server environment, parsing, loading, and reporting on all of the audit logs.
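On the SQL Server side, the binary .sqlaudit files that SQL Server Audit produces can be parsed with the built-in sys.fn_get_audit_file function. A minimal sketch (the share path is an assumption):

                -- Read all audit files from the central share into relational form
                SELECT event_time, server_principal_name, action_id, statement
                FROM sys.fn_get_audit_file(
                    '\\auditshare\sqlaudits\*.sqlaudit', DEFAULT, DEFAULT);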

We are expanding on the “Reaching Compliance: SQL Server 2008 Compliance Guide” to more easily handle larger volumes of structured and unstructured data and to gain richer and deeper insight using the latest analytics. To achieve this, we are building a Big Data-to-BI project involving HDInsight (Hadoop on Windows or Azure), SQL Server 2012, a SQL Server Analysis Services 2012 Tabular instance, Integration Services, PowerPivot, and Power View.

The purpose of this SDK is to provide a set of tools and jumpstart scripts to implement the auditing project involving HDInsight, SQL Server 2012, PowerPivot, and Power View.

Implementation Overview

The basic implementation of the Auditing and Reporting solution is shown in the figure below.

[Figure 2: Big Data-to-BI auditing and reporting solution architecture]

The general flow of data in this solution is that audits are created on any number of SQL Server instances (2008 and 2012) in the environment and are set to log to the file system. The audit logs are stored directly on a central network file share. A scheduled SQL Server Agent job runs an SSIS package that reads the audit log files, combines them into larger files (250 MB to 1 GB each), and uploads them to HDInsight Blob Storage, which is the storage source for HDInsight on Azure or Windows.
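As a hedged sketch of the first step, a server audit that writes to the central share might be defined as follows, with a server audit specification capturing failed logins as an example; the audit name, specification name, action group, and path are illustrative assumptions:

                -- Define a server audit that writes to the central file share
                CREATE SERVER AUDIT CentralComplianceAudit
                TO FILE (FILEPATH = '\\auditshare\sqlaudits\', MAXSIZE = 256 MB);

                -- Capture failed logins at the server level, for example
                CREATE SERVER AUDIT SPECIFICATION FailedLoginSpec
                FOR SERVER AUDIT CentralComplianceAudit
                ADD (FAILED_LOGIN_GROUP);

                -- Enable both
                ALTER SERVER AUDIT CentralComplianceAudit WITH (STATE = ON);
                ALTER SERVER AUDIT SPECIFICATION FailedLoginSpec WITH (STATE = ON);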

Once the audit logs are stored in HDInsight Blob storage, we use Hive, a data warehouse framework for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets.
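Assuming the SSIS package writes the combined audit records out as delimited text, an external Hive table can be declared directly over the uploaded files. A minimal sketch in HiveQL (the table name, column list, delimiter, and asv:// path are all assumptions):

                -- Declare an external table over the uploaded audit files
                CREATE EXTERNAL TABLE auditlogs (
                    event_time            STRING,
                    server_instance_name  STRING,
                    server_principal_name STRING,
                    action_id             STRING,
                    statement             STRING
                )
                ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
                STORED AS TEXTFILE
                LOCATION 'asv://auditcontainer/auditlogs/';

                -- Example summarization over the audit data
                SELECT server_instance_name, action_id, COUNT(*) AS eventcount
                FROM auditlogs
                GROUP BY server_instance_name, action_id;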

We then create a BI Semantic Model (Tabular) to bring the relevant Hadoop data into SQL Server Analysis Services Tabular by using HiveQL via a SQL Server linked server connection to Hadoop Hive. Analysis Services can then serve up the data for ad-hoc analysis and reporting.

Reports are created with Excel 2013, either using Power View to interact with views of the data from the SSAS Tabular model, or using Data Explorer to import audit data from the Hive external table in HDInsight, allowing compliance auditors and server administrators to assess server compliance and trends in server compliance.

This information is then fed back to the appropriate security, administration, and application development teams to enact policies that improve levels of compliance.

As the system evolves, teams may load additional application audit logs into Hadoop, which could help tie SQL Server-specific activities to application and business activities.

The SDK will be available soon at GitHub to download the entire project. Stay tuned!