Archive for the ‘BI Platform’ Category

Migrating BEx-based Information Spaces to BusinessObjects 4.0

August 8th, 2013

Sometimes in software it can feel that you take two steps forward and one step back.

One example of this is with BEx-based Explorer Information Spaces when migrating from XI 3.1 to BO 4.0. In 4.0, SAP removed the ability to create information spaces against the legacy universe (unv) and instead required everyone to use the new universe format (unx).

On the surface this lack of support for the legacy universe didn’t seem to really be a problem because BusinessObjects 4.0 provides a painless way to migrate universes from the legacy unv to unx format.

What I didn’t realize until a few months ago was that this WAS a big problem: a big problem for customers who were creating Information Spaces based on BEx queries and Infocubes.

I’d like to share with you my journey to understand what is really required to move my BEx-based Information Spaces to BusinessObjects v4.0.

Explorer, BEx and Unx

Explorer is NOT an OLAP-aware product and therefore doesn’t understand hierarchies, so in XI 3.1 the unv would flatten the hierarchies for you, generating each level as a separate column named L00, L01, L02, L03, etc. (for example, L00 = total company, L01 = region, L02 = cost center). There are some advantages to this approach, but there are obvious drawbacks as well.

With BusinessObjects v4.0, SAP rolled out the idea of a transient universe, such that if you wanted to create a WebIntelligence report you didn’t have to create a universe first. WebIntelligence would create a universe on the fly and a hierarchy is treated as a single column with the ability to use +/- to expand and collapse. (You can read more about WebIntelligence and BICS connectivity here.)

If you try to convert an XI 3.1 universe based on a BEx query to a unx, it gives you the following error:

Now What?

The only 2 options I came up with to overcome this challenge were:

  • Use WebIntelligence (unv) to generate an Excel file and have Explorer index the xls file.
  • Leverage a multi-source relational connection to connect to the BEx cube and hierarchy relationally

Approach 1 – Excel Output

The approach I took here was to use the legacy unv file to create a WebI report.  I would then schedule this WebI report to generate an Excel file.  The Excel file would overwrite the ‘source’ Excel file of the Information Space.

To set this up, I created the WebI report with no header and a table that starts at position (0,0).  This table will easily export to Excel.

Sample WebI output prepped for Excel

Next, export the results to Excel 2007 format (xlsx)

Resulting WebI output in Excel

I then uploaded the xlsx file to the BI Launchpad and used it as a source for my Information Space.


Once Explorer was able to generate the information space the way I wanted it, I was ready to schedule the WebIntelligence report to Excel output.

Information Space based off an Excel file

Now, look at the properties of the Excel file, because we will need this when scheduling our WebIntelligence report.

Find the FRS location of the Excel File

I can now schedule the WebIntelligence report to run on a recurring basis, since I know the physical location of the Excel file in the FRS.

In my case the directory is: D:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\FileStore\Input\a_053\035\001\74549\   and the file name is:  work order excel-guid[e93a2669-0847-4cd2-b87d-390beb1c956c1].xlsx.

I can use that file name in the WebIntelligence scheduler and write over the source xlsx file for my information space.

When scheduling the file, make sure to select the following:

  • Recurrence – choose the frequency that makes sense for you.
  • Formats – choose Excel.  (The default is always WebI.)
  • Destinations – choose File System.  Do NOT keep an instance in the history.  Use the xlsx directory and filename.

Select the correct parameters in the recurring schedule

As a last step, I would then schedule Explorer to reindex the Information Space after the file is updated, and all will be good.

Scheduling Indexing for Explorer

Now the information space should be generated and everything should work just fine.

I must say that at first I really struggled to get this technique to work.  I’m not sure if my WebI output file was too large or if there was some other type of problem.  I spent hours (and hours and hours) trying to troubleshoot why this wasn’t working.  After speaking with technical support and having them attempt the same process, we thought that there was some type of incompatibility between the Excel files generated by WebIntelligence and the Excel format required by the Information Space for indexing.

There were a couple of different errors I encountered.  A few times I forgot to specify Excel as the output type so I got this error:

The only way I was able to fix the error was to restart my Explorer services, reschedule the WebI report, and make sure it was exporting in Excel format.

Another error I got was when I didn’t properly exit the information space before attempting to reindex it.  In that case I would get an indexing error and see this when I opened up the information space through “Manage Spaces”.

I found this method of solving the issue to be somewhat flaky.  Sometimes, after WebIntelligence overwrote the Excel file, Explorer was no longer able to index it.  It was very frustrating, and technical support was only able to provide limited help because this is not the recommended solution for the problem.

So what did SAP recommend?  They suggested a much less elegant but more robust and fully supported approach — a multi-source universe.

Approach 2 – Multi-source Solution

This solution is less straightforward, but I was able to get it working and SAP says that this is the only solution that’s officially supported.

There are three things we need to do:

  1. Generate the flattened hierarchy lists and load them into another database (e.g. SQL Server)
  2. Read the SAP Note 1656905 about creating a unx universe from a BW Infocube
  3. Link the two systems via a multi-source connection

In order to generate the flattened hierarchy, you must use the legacy unv universe against your Infocube.  The ability to flatten a hierarchy is not available in a unx universe.  (SAP says that BICS does not flatten hierarchies and there are no plans to enable it, because then it’s no longer a hierarchy.  Bummer.)

Next, create a WebIntelligence report based on a legacy unv universe.  Add all levels of the hierarchy to the report and run the report.  Once you have the report, export the results to an Excel file and load them into a relational database.

I created a table called: tblPCHier

Flattened BW Hierarchy in SQL Server
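
In case it helps, here is a minimal sketch of what such a table might look like.  The column names, data types and the number of level columns are my own assumptions; adjust them to match the depth of the hierarchy you flattened.

-- Hypothetical definition of the flattened hierarchy table.
-- NodeKey is the value that will later be joined to the BEx fact table;
-- L00 through L03 are the flattened hierarchy levels from the WebI extract.
CREATE TABLE tblPCHier (
    NodeKey VARCHAR(32) NOT NULL PRIMARY KEY,
    L00     VARCHAR(60) NOT NULL,  -- top node of the hierarchy
    L01     VARCHAR(60) NULL,
    L02     VARCHAR(60) NULL,
    L03     VARCHAR(60) NULL       -- leaf level
);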

Next, I imported the Excel output into my new database table:

BW Hierarchy is loaded in SQL Server

Note:  You need to watch out for accidental duplicate names at lower levels of the hierarchy.  Because WebIntelligence will automatically try to aggregate multiple values, be aware that if child nodes have the same name but a different parent value, the child nodes will roll up and display an aggregated value within Explorer.  If this is not what you want, make sure that the child node names are unique.
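
A quick way to check for this situation is a simple GROUP BY query against the hierarchy table.  This sketch assumes the hypothetical tblPCHier layout above, with the child nodes in L03 and their parents in L02:

-- List child node names that appear under more than one parent
SELECT L03, COUNT(DISTINCT L02) AS ParentCount
FROM tblPCHier
GROUP BY L03
HAVING COUNT(DISTINCT L02) > 1;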

Next we are ready to go into the IDT (Information Design Tool) and create our multi-source universe.  Follow the instructions listed in SAP Note 1656905 to understand how to create a unx on top of a BW Infocube.

Once our BW star schema has been added to our data foundation, we can add another connection to our other data source, the relational database, so we can bring in our hierarchy.

Lastly, join the column from the BEx fact table (SAP) to the key of the hierarchy table (SQL Server).

When our multi-source universe is complete we should see a connection to SAP, a connection to a relational database, a data foundation and a universe.

Completed unx components

Here is a preview of my hierarchy table from within the IDT:

View of flattened Hierarchy

So now we just need to verify that everything we need is in the universe.  The big challenge is that not everything from BEx is available in a unx.  Here are some of the things we lose when we go to a relational universe:

  • Calculated Key Figures
  • Restricted Key Figures
  • Variables
  • Conditions
  • Unit / Currency Conversion
  • Hierarchies (which we know about)
  • Custom Structures
  • Time Dependent Objects

I suggest you commit this list to memory.

In one case I had over 50 calculated key figures that I needed to map into Explorer, and recreating that logic in SQL was difficult and tedious.

In that case I had measures that included some time-dependent calculations (see the SQL sketch after this list):

  • Total AR
  • Current AR, Over 30 days, Over 60 days, Over 90 days, Over 120 days
  • Current Debit, Debit over 30, Debit over 60,  Debit over 90,  Debit over 120
  • Current Credit, Credit over 30, Credit over 60 and Credit over 120
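
To give a feel for the kind of SQL these measures translate into, here is a rough sketch of how a few of the aging buckets might be recreated relationally.  The table and column names (tblARItems, PostingDate, Amount) are hypothetical:

-- Hypothetical recreation of a few AR aging buckets in SQL Server
SELECT
    SUM(CASE WHEN DATEDIFF(DAY, PostingDate, GETDATE()) <= 30
             THEN Amount ELSE 0 END) AS CurrentAR,
    SUM(CASE WHEN DATEDIFF(DAY, PostingDate, GETDATE()) > 30
             THEN Amount ELSE 0 END) AS AROver30,
    SUM(CASE WHEN DATEDIFF(DAY, PostingDate, GETDATE()) > 60
             THEN Amount ELSE 0 END) AS AROver60
FROM tblARItems;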

In BEx, I had implemented exit variables to do dynamic time calculations.  Now I needed to do the same thing for Explorer.

To accomplish this, I built SQL Server views which dynamically calculated values such as Last Day Previous Month and Last Day Previous Month Previous Year.  I could then use these dynamic calculations in my universe.

Equivalent userexit logic in SQL Server

Although I included these views in the data model, I did not need to join them to anything.

These views were simply used to dynamically generate a date value which was used to restrict the queries to the correct data elements.
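
As an illustration, here is one way such a view might be defined.  I am showing both calculated dates in a single view for brevity, and the expressions below are only one of several ways to derive these dates in SQL Server:

-- Hypothetical view returning dynamic reference dates
CREATE VIEW [V Last Day Prev Month] AS
SELECT
    DATEADD(DAY, -DAY(GETDATE()), CAST(GETDATE() AS DATE)) AS [Last Day Prev Month],
    DATEADD(YEAR, -1, DATEADD(DAY, -DAY(GETDATE()), CAST(GETDATE() AS DATE))) AS [Last Day Prev Month Prev Year];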

Here is a look at the measures that were created in the universe (click on the image to enlarge):

Measures within the Universe

Here is a screenshot of the WHERE logic for “Total AR, Last Month”:

WHERE Logic for restricted key figure

Here is some of the logic spelled out.

WHERE logic for “Total AR” that leverages curDate()

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= curDate()
OR
(@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) > curDate()
OR
@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'}
)

WHERE logic for “Total AR, Last Month” that leverages the Last Day Prev Month view

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
OR
(@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) <= @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
OR
@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'}
)

If you have to do a lot of this type of time-based calculation logic, you might also want to review some of my previous blogs on the topic.  You don’t necessarily have to create views in the database to do the time calculations:
http://trustedbi.com/2009/12/29/timeseries1/
http://trustedbi.com/2010/01/06/timeseries2/
http://trustedbi.com/2010/01/18/timeseries3/

Caveat

This method of implementation is not for the faint-hearted.  It can potentially mean a lot of work.

I would like to highlight some important considerations:

  • If your hierarchies change on a regular basis, you will need to automate the updating of the SQL Server table which contains the hierarchy.
  • If you have a lot of calculated key figures, they will need to be recreated within SQL.
  • Any logic you built into variables or user exits may need to be recreated within SQL.

Given these constraints it’s hard for me to endorse converting all your Explorer information spaces to BusinessObjects v4.0 without first understanding the complexity of your Infocubes. The good news is that SAP BusinessObjects v4.1 will overcome these limitations.

Try the Excel method first.  It’s worth a shot.

BusinessObjects v4.1 to the Rescue

Recently, in a What’s New in Explorer 4.1 ASUG call, SAP announced that in BusinessObjects 4.1, unv files will be supported.  This is great news.  Of course, that begs the question: if unx is the way forward, how will we flatten our SAP hierarchies in the future?

SAP BusinessObjects 4.1 is currently in ramp-up and the latest information on Service Marketplace says that it is targeted to exit ramp-up on November 10, 2013.  As always, this date is subject to change.

One additional piece of advice: if you are curious to learn about future releases and maintenance schedules, I suggest you check out this site on Service Marketplace: https://websmp108.sap-ag.de/bosap-maintenance-schedule  Although these are only planned dates, they are useful to have when planning system maintenance and upgrades.

Hope you’ve found this helpful.

«Good BI»


ENTICE Users with Interactive Dashboards that work…

July 11th, 2013

I was delighted to see yet another extremely informative webinar from Antivia.  They partnered with FedEx to deliver a compelling webinar that discusses how organizations can use dashboards to simplify the consumption of key corporate information.

Donald MacCormick kicked off the webinar with a reminder that we don’t always see what we think we see and that dashboards in their interactive variety aren’t really dashboards, but rather the future of end-user business intelligence, especially when it comes to delivering content to mobile devices.

Checkershadow Illusion by Edward Adelson (MIT)

Donald used the BBC Weather website and Yahoo!MarketDash iPad app as examples of interactive “ENTICE”-ing dashboards that are:

  • Easy to Use
  • No-Training Required
  • Task focused
  • Interactive
  • Connected
  • Engaging

FedEx ICE Project Results

ICE stands for In-Center Excellence and was deployed to help the organization improve performance across FedEx Office (Kinkos) locations.

Andy Mills and Charlotte Huff discussed their use of SAP BusinessObjects and how, together with the leadership of their new COO, Kim Dixon, they were able to transform the way FedEx Office personnel access critical business metrics and information.  They focused on leveraging dashboards to provide a ‘one stop’ for information coming from multiple systems.  They also made sure the information was actionable.  In addition, by leveraging the SharePoint portal to allow users to collaborate and provide ‘kudos’, the team helped FedEx Office create positive energy with friendly competition and positive peer feedback.

Here is a video of the presentation… and listen to all the great ideas they used for generating BUZZ prior to the release.

What struck me was the emphasis that these interactive dashboards aren’t ‘done’, but rather continue to evolve to include new elements of FedEx’s ongoing strategy.  As needs arise to include additional financial reporting or voice-of-the-customer information, their dashboards are evolving as well.

Are You Building Relevant Analytics?

What was apparent from the results is that FedEx Office is forward thinking and putting together an evolving, future-proof analytics strategy.  Through examples like theirs, many customers are leveraging a new type of dashboard that is built with the following important concepts in mind:

  • Dashboards Replacing Reports
  • Simple and Sophisticated
  • Wide Adoption (1000’s of users)
  • Lean / Agile / Iterative Approach
  • Feedback from End-users
  • Engaging BI
  • Actionable BI

BI is not there to let you know, it is there to let you act.   – Andreas Bitter

If you’d like to contact Antivia, visit their website and they’ll be in touch: http://go.antivia.com/Contact.html

«Good BI»

BusinessObjects XI 3.1 SP6 full build available…

March 27th, 2013

I was reviewing my SAP Support Portal Newsletter after returning from vacation and discovered that SAP has released SAP BusinessObjects Enterprise XI 3.1 Service Pack 06 – Full Build.  I really wasn’t expecting that, but I was happy to see it.  It sure beats worrying about the correct ‘upgrade path’, such as the complications associated with moving to SP5.

XI 3.1 SP6 has been released on all the standard platforms: Linux on IA32 32bit, HPIA_64, HP-UX on PA-RISC 32bit, AIX 32 Bit, Solaris on SPARC 32bit.

Although I’ve been actively using v4.0 for a while, I know that many SAP Analytics customers haven’t yet made the transition to v4.0.

I would definitely like to hear from customers who are still waiting to make the move to find out where the hesitation is.  From what I understand most of the core stability issues have been resolved.

Highlights on XI 3.1 SP6

The big standout I saw was support for Windows Server 2012, which I had been asked about several times and I’m glad to see it’s there.

This release also fixes an outstanding issue with the opendocument URL length limit of 2083 characters, which can be a problem if you are doing a lot of document-to-document hyperlinking.

If you are interested in additional details around what’s new, you can find that here:
http://help.sap.com/businessobject/product_guides/boexir31SP6/en/xi31_sp6_whats_new_en.pdf

You can also find the PAM document here:
https://websmp107.sap-ag.de/~form/sapnet?_SHORTKEY=00200797470000096350&_SCENARIO=01100035870000000202&

Other Notables for XI 3.1

Know your dates and deadlines.  You need to plan to be off XI 3.1 by the end of 2015 and that’s not very far away.

Release to Customer: 09/30/2008
General Availability: 09/30/2008
End of Mainstream Maintenance: 12/31/2015
End of Priority One Support Phase: 12/31/2017

It’s hard to believe it’s been 7 years already!  I’m feeling old.

Are You Getting Notifications?

If you have access to SAP Support, then you should also be taking advantage of the regular alert notification emails that you can proactively receive from support.  You can sign up for them here:
http://service.sap.com/notifications

You can define exactly what products you want to receive notifications about.

Here is a video tutorial that walks you through it.

Don’t delay.  Sign up today.

«Good BI»

Using Explorer and Lumira with SAP BW

March 18th, 2013

This is a quick post to let you know that there is an excellent whitepaper available which explains everything you need to know about leveraging Explorer and Visual Intelligence with SAP BW.

The bottom line is that organizations must leverage some type of acceleration technology – either HANA or BWA.

Here is the original article:
http://www.saphana.com/docs/DOC-2943

Here is a link to the must-read technical document:
http://www.saphana.com/servlet/JiveServlet/download/2943-3-9226/SAP%20VI%20and%20Explorer%20on%20BW%20powered%20by%20SAP%20HANA%20v2.pdf

This document outlines four different scenarios, showing the different implications for SAP BusinessObjects Explorer and SAP Visual Intelligence.

  1. SAP NetWeaver BW standalone without SAP NetWeaver BW Accelerator (BWA) and without SAP HANA
  2. SAP NetWeaver BW in combination with SAP NetWeaver BW Accelerator (BWA)
  3. SAP NetWeaver BW with SAP HANA, DB edition
  4. SAP NetWeaver BW with SAP HANA, Full Use Editions (allows write-back)

In order to leverage Scenario 3 or 4 you must be on:

  • SAP HANA 1.0 SP5 or higher
  • SAP HANA Modeler 1.0 SP4 Rev 37 or higher
  • SAP BusinessObjects Explorer 4.0 SP4 or higher
  • SAP Visual Intelligence 1.0 SP4 or higher
  • SAP NetWeaver BW 7.3 SP7 or higher (with SAP Notes: 1703061, 1759172, 1752384, 1733519, 1769374 and 1790333)

When reading the whitepaper, make sure that you read through to the end.  Pages 13-15 provide important caveats and new roadmap information.

«Good BI»


Building a Predictive Model

March 6th, 2013

Last year SAP launched a new predictive solution for the BusinessObjects Business Intelligence Platform.  Prior to 2012, SAP had partnered with SPSS to provide predictive functionality; however once SPSS was acquired by IBM, it was time for SAP to develop their own solution.  This gave birth to Predictive Analysis.

In version 1.0, Predictive Analysis was built leveraging the Eclipse framework.  Since then, the Predictive Analysis interface has been merged with Visual Intelligence, providing customers with a powerful predictive analytics and visualization solution.

When Do I Use Predictive?

Predictive Analysis should be used anytime you need to leverage a predictive algorithm to get additional insights based on statistical modelling.  Here is the wiki site which talks about predictive analysis in more detail.

Since Utilities is my primary area of focus, here is a quick post I wrote about the different Predictive Use Cases for Utilities.  Predictive Analysis is widely used in most industry verticals for different use cases.  It is probably most widely used in retail, as organizations look for ways to increase their share of the customer wallet.

(Here is more information on my BI decision tree.)

How to Build a Predictive Model

Using Predictive Analysis is extremely easy.  Organizations can easily access corporate data and use it to gain new insights that allow them to decrease cost and increase profitability and efficiency.

In this case, I would like to see how I might predict the propensity of my transformers to fail.  I will look at historical and current asset data together with failure rates and see if I can predict which assets are most prone to failure.

Connecting to the Data

With Predictive Analysis you can access any data from any of your corporate or local data sources.  In my case I’ll be using a local Excel spreadsheet which contains the key failure data I’ll need for this exercise.  If you’d like you can download it here:
http://trustedbi.com/images/blog/pa/Transformer_Failures.zip

Step 1 – Connect to Data

Although I can pull information from a Universe or HANA, I am selecting an Excel spreadsheet called Transformer Analysis.

Step 2 – Select the Excel data source

Now I can preview the data and confirm I have the correct data set.  I can choose to exclude columns if I wish to.

Step 3 – Preview and Read the data into Predictive Analysis

Once the data has been loaded, you can begin to look through the data, visualize it and analyze it for additional insights.  In our data set, the key columns we will be looking at are:

  • Status:  OK/Failure – what is the current status of this asset
  • Attribute Fields:  VegMgmt, Overloads, PM Late, Miles From Ocean, etc. – these are fields that will help us determine if we can predict future failure

Step 4 – Now we have full capabilities to Manipulate and Visualize the Data

Building the Model

Next we will want to begin to do some analysis, or we can move directly into Predictive Analysis.  Since this is not a demo of Visual Intelligence, I will simply move directly into predictive mode by changing my perspective and selecting Predict:

Step 5 – Change Perspective from Prepare to Predict

Now that I am in the Predict perspective, I can select from any number of prebuilt functions.  A short overview of what these functions are used for can be found here.  Here is a screenshot of those functions:

PreBuilt Predictive Functions (Click for full size)

For this predictive exercise, we will use a Decision Tree.  We will use the attributes of the data to see what influences or predicts the likelihood of failure.  Double-click on the R-CNR Tree and it will be added to the workspace.

Step 6 – Select the R-CNR Algorithm

Next, hover over the R-CNR algorithm and select the properties tab.

Step 7 – Modify the Properties

The properties dialog for the R-CNR Tree algorithm will appear.  Here we want to specify which column we are trying to predict and which fields will influence that prediction.  Therefore we will specify Status as the dependent column.  All the remaining columns will be influencers.  In my example I selected: Status, VegMgmt, PMLate, Overloads, MilesFromOcean, Manufacturer, WaterExposure, MultipleConnects, Storm, AssetType, Repairs.

I also changed the name of the “Newly Added Columns” from PredictedValues to PredictedStatus.

Step 8 – Fill in the Parameters for the R-CNR algorithm

Once the values have been selected, we will be ready to run the algorithm.

NOTE:  I did find that there is a bug in the R algorithm: column names cannot contain a space.  If they do, you will get an error when you try to run the algorithm.

Running the Model

What’s great about Predictive Analysis is that, as an analyst, you can build your predictive workflow one step at a time.  After each step you have the option of running your workflow up to that point.  You can use the “Run Till Here” button on the individual step in the analysis process OR you can press the green arrow to run the entire workflow.

Step 9 – Run the Algorithm

After running the algorithm successfully, we will want to view the results.

Step 10 – Success!  Now Let’s look at the Results

Once we click on the results tab, we’ll be able to see any new content that’s been created.  In our case, we will see there is a new column which has been created, called PredictedStatus.

Step 11 – Seeing the Results

Now if we want to drill into the algorithm results generated by Predictive Analysis we can do that as well.

Step 12 – View the Charts

In the case of an R-CNR algorithm, we can see the graphical decision tree which represents the algorithm.

Step 13 – Decision Tree Graphic (Click on link to see full size)

You may not be able to see the entire decision tree based on your screen resolution and the number of levels generated by the algorithm.  There is a limit to how many times you can zoom out in order to see the graphic.

If you would rather see the results of the algorithm in text format you can do that as well.

See Algorithm result in Text format

Analyzing the Results

Once we’ve completed all the steps in the process, we can continue to visualize the data by leveraging any new columns that have been generated by the output of the workflow.  In this case, we have generated one additional column called PredictedStatus.  This can now be used in analysis.

Click on Visualize (which is next to the Charts button) and we will switch to visualization mode.

Step 14 – Visualize the Results (Click to see full screen)

In this case, we pulled in Status and PredictedStatus, and created a Count measure based off the AssetId field.  In the visualization above, we are comparing predicted failures vs. actual failures.

In this case we can see that of the assets that are currently OK, 290 of them are predicted to fail!

That’s Insight.  That’s Powerful.  That’s Predictive Analysis.

«Good BI»
