Archive for the ‘Lumira’ Category

Lumira 1.29 – Data Blending Comes of Age

November 23rd, 2015

Lumira recently introduced a new concept called Blending.  Blending allows data from multiple datasets or fact tables to be displayed across a common dimension.  This was a critical next step in Lumira's evolution, because the job of an analyst often requires pulling together multiple result sets into a single visualization.  It's great to be able to visualize actual vs. budget together in a single graph; however, you often want to create a new calculation across the datasets, e.g. variance, variance percent, etc.  This was not possible before.

Blending Enhancements

On Friday I downloaded the latest release of Lumira — Lumira 1.29.  There is a lot here to see, and although it's always great to see more features coming into the product, the enhancements to blending really make this release stand out and make this version of Lumira a must-have upgrade.

Lumira has always done a good job of allowing users to manipulate data, but the lack of a micro-cube and the inability to create custom calculations across multiple datasets were serious limitations.  Until now the only options were to:

  • Perform data preparation in the WebIntelligence micro-cube and access the micro-cube directly using the APOS Data Gateway
  • Hope you could leverage the Join_By_SQL option within the Universe so that only a single result set would be returned.

As you might imagine, this was quite limiting and I ran into this first hand during a recent customer evaluation.  Here was my scenario.

Supporting YoY Comparisons

I was asked to calculate the growth factor within a dataset that included data over time.  The growth factor was defined as

G = (Volume_cy / Volume_(cy−t))^(1/t) – 1

Where t is the time in years. For the purpose of this analysis, t is taken to equal 3, 5 or 10 years.  cy is the current year.  The example below will use the standard year breakdown.

Example:  The table below is used in the example calculations

  Year:    2014     2013     2012     2011     2010
  Volume:  125,452  124,118  120,506  119,987  126,623

In 2014, the 3 year growth would be:

G = (125,452 / 119,987)^(1/3) – 1 = 1.5%

In 2013, the 3 year growth would be:

G = (124,118 / 126,623)^(1/3) – 1 = –0.66%

At first, what seemed so simple, wasn't.

The request is straightforward, yet with Lumira it wasn't so simple.  When processing a Lumira document, the formula engine processes one record at a time, so prior to blending, the only option was to create custom calculations that filter the volume for each year.  I would then have to use these custom calculations in yet another calculation for the growth.  Here is a sample.

2014 Volume Custom Calculation:   if {Year} = 2014 then {TransCount} else 0

2011 Volume Custom Calculation:   if {Year} = 2011 then {TransCount} else 0

2014 3yr Growth:  Power(({Transcount_2014} / {Transcount_2011}), 1 / 3) – 1

So for 5 years of 3-yr growth calculations I would have to create 15 formulas!  Yikes!

It was possible, but there were two big problems:

  1. Each year you would have to add more formulas because the calculations weren’t dynamic by year.
  2. You lost the ability to visualize the data in context.  In this case you would not be able to easily show these calculations in a YoY line chart.

Could blending be the answer?  Possibly.  Here’s what I discovered.

Previous Blending Helped But…

Suddenly I had an idea.  What if I loaded the dataset in twice?  Once for the current year, and a second time leveraging “Year+3”.  This would allow me to join on year, because the value for the year 2011 would now say 2014.  I could then display the current year's value next to the value from 3 years ago.  It worked!
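
Conceptually, this is just a self-join of the dataset to itself, offset by three years.  In SQL terms it would look something like the sketch below (purely illustrative: Lumira does this through linked datasets, not SQL, and the table name Volumes is made up):

SELECT cur.Year, cur.TransCount, prev.TransCount AS TransCount_CY3
FROM Volumes cur
LEFT JOIN Volumes prev ON prev.Year + 3 = cur.Year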

Here is what I did.

I loaded the same dataset twice and renamed them to Current Year and CY-3.

Two imported datasets

In the dataset named, CY-3, I created a calculated dimension called Year+3 in which I added “+3” to the value of the Year column… so now 2014 became 2017, 2013 became 2016, etc.

Year+3 Calculated Field

In the previous step, when I created Year+3 and added 3 to the Year, the field went from being defined as an integer to a real number.  Lumira doesn't allow you to join across data types, and since my original Year column was still defined as an integer, I needed a version of the current year that was also a real number.  I accomplished this by creating a new calculated dimension called Year+0, using the formula Year+0.

Year+0 Calculated Field
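
For reference, both calculated dimensions are one-line formulas.  Something like the following should work, assuming your year column is named Year as in my dataset:

Year+3 (in the CY-3 dataset):  {Year} + 3

Year+0 (in the CurrentYear dataset):  {Year} + 0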

Next I linked the two datasets.  I was able to link Year+3 from the CY-3 dataset to Year+0 from the CurrentYear dataset.

Dataset Linking

I also renamed the TransCount field in the CY-3 dataset to Transcount (CY-3) to avoid any confusion with the value TransCount in the CurrentYear dataset.

Next I created a column chart visualization showing the data from CurrentYear and the CY-3 datasets together in the same chart in the following manner:

First, add Year+0 and TransCount to the column chart.  This shows you the current values.  Next, change the dataset you are working with to CY-3 and add TransCount (CY-3) to the output.  Now I can see the CurrentYear TransCount compared to the TransCount from 3 years earlier.  As expected, there are null values for the earliest 3 years, because I loaded the same dataset twice and my initial calculation is using Year+3.

Viewing YoY Volume (TransCount)

In other words, if I hover over the value for 2004 I will see it matches the Transcount(CY-3) value for 2007.

Next I want to Exclude the 2004 through 2006 numbers.

Excluding Years with null values

So now I am left with the correct bar chart.

Viewing YoY Volume (with nulls excluded)

The next step is to do some math between the two datasets.  For example, I could calculate the variance: what is the difference between CurrentYear and CY-3?  But that wasn't quite what I needed.  I needed to calculate the growth rate using the formula introduced above.  The problem was that in Lumira 1.28 there was no way to do this.  Now you can!

Custom Calculations Across Datasets

As soon as I got my hands on Lumira 1.29, I found the new Custom Calculation feature in two places.  You can either:

  • Use the menu at the top of the charting area
  • Right-click on the measure and select Add Calculation >> Custom Calculation
 
From the Menu  |  From the Measure

So let’s create this new custom calculation.  Below you can see where I’ve highlighted a new and critical change.  You can now select the “Dataset” of the fields you want to use in the calculation.  This allows me to choose measure elements across both data sources:

Ability to select either dataset

So now I can create the following new growth calculation:

Power({DS1.TransCount}/{DS2.Transcount (CY-3)},1/3) – 1

Once I add this new YoY calculation to the chart and remove the previous measure values I can see the following results:

Dynamic 3yr Growth YoY

Now I have a dynamic growth calculation and I didn’t have to create dozens of formulas!  It works beautifully.
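
The same mechanism works for any cross-dataset math.  For instance, the simple CY vs. CY-3 variance mentioned earlier would just be:

{DS1.TransCount} – {DS2.Transcount (CY-3)}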

Now the only remaining limitation is that I cannot show the 3yr growth and the 5yr growth in the same chart, because each of them is linked to a different combination of results (Year linked to 3Year, and Year linked to 5Year).  There is currently no way to link more than two datasets together.

Exercise for You

If you would like to try this exercise out for yourself, I have attached the associated data file here.  All you will need to do is load the dataset twice and then create the calculations as described above.

Now I'd like you to calculate the growth of Volume using the Growth Formula stated above, but this time instead of doing it for 3 yrs, do it for 5 yrs.  Once you're done, you should get a chart like this:

Dynamic 5yr Growth YoY
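
As a hint, the 5-year version of the custom calculation looks just like the 3-year one, with t = 5.  Assuming you name the offset copy of the dataset CY-5 and its measure Transcount (CY-5), it would be:

Power({DS1.TransCount} / {DS2.Transcount (CY-5)}, 1 / 5) – 1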

Wrap-up

Lumira 1.29 is a huge leap forward.  The ability to do data blending and create new dynamic calculations across multiple datasets is a very important capability indeed.  It goes a long way toward removing many of the limitations you may have encountered around data preparation and analysis thus far.

So the next question is… when will they be releasing Lumira 2.0? 🙂

Enjoy!

«Good BI»

Lumira | Predictive Analysis Troubleshooting: Hangs on the splash screen when starting…

October 1st, 2014

I recently discovered an issue when leveraging both Lumira and Predictive Analysis 1.18.  It took me a while to debug, so I wanted to share the results with you.

The version of Lumira/Predictive Analysis I had installed on my desktop had a permanent key with a timeout, and I didn't realize that the key had expired.

Keycode prior to expiration

As a result, when I started Lumira, I saw the splash screen with status updates that said “starting engine”, then “preparing user interface”.  Eventually it would simply hang on the splash screen with a small message that said “Loading module ‘Publish to SAP HANA’”; at this point Lumira hung and no new messages appeared.

SAP Lumira hangs with “Publish to SAP HANA”

In order to resolve this issue you will need a new keycode; otherwise you will be reverted back to SAP Lumira Personal Edition.

SAP Lumira Resolution

In order to resolve this issue for SAP Lumira, you need to do the following:

  • Delete the folder:  C:\Users\Public\sapvi
  • Launch Lumira
  • Enter a new keycode.

NOTE:  If you do not have a valid keycode, then I suggest you:

SAP Predictive Analysis Resolution

The instructions are the same as for SAP Lumira above except that the directory you will delete is C:\Users\Public\sappa

I trust this will resolve your issues.

I don't know about you, but I'm really enjoying Lumira.  For the latest features and information about Lumira, check out these YouTube videos.

«Good BI»

 

Categories: Lumira

Migrating BEx-based Information Spaces to BusinessObjects 4.0

August 8th, 2013

Sometimes in software it can feel that you take two steps forward and one step back.

One example of this is with BEx-based Explorer Information Spaces when migrating from XI 3.1 to BO 4.0. In 4.0, SAP removed the ability to create information spaces against the legacy universe (unv) and instead required everyone to use the new universe format (unx).

On the surface this lack of support for the legacy universe didn’t seem to really be a problem because BusinessObjects 4.0 provides a painless way to migrate universes from the legacy unv to unx format.

What I didn't realize until a few months ago was that this WAS a big problem: a big problem for customers who were creating Information Spaces based on BEx queries and InfoCubes.

I'd like to share with you my journey to understand what is really required to move my BEx-based Information Spaces to BusinessObjects v4.0.

Explorer, BEx and Unx

Explorer is NOT an OLAP-aware product, therefore it doesn't understand hierarchies.  In XI 3.1, the unv would flatten the hierarchies for you and generate a flattened hierarchy as L00, L01, L02, L03, etc.  There are some advantages to this approach, but there are obvious drawbacks as well.

With BusinessObjects v4.0, SAP rolled out the idea of a transient universe, such that if you wanted to create a WebIntelligence report you didn’t have to create a universe first. WebIntelligence would create a universe on the fly and a hierarchy is treated as a single column with the ability to use +/- to expand and collapse. (You can read more about WebIntelligence and BICS connectivity here.)

If you try to convert an XI 3.1 universe based on a BEx query to a unx, it gives you the following error:

Now What?

The only 2 options I came up with to overcome this challenge were:

  • Use WebIntelligence (unv) to generate an Excel file and have Explorer index the xls file.
  • Leverage a multi-source relational connection to connect to the BEx cube and hierarchy relationally

Approach 1 – Excel Output

The approach I took here was to use the legacy unv file to create a WebI report.  I would then schedule this WebI report and generate an Excel file.  The Excel file would overwrite the ‘source’ Excel file of the Information Space.

To set this up, I created the WebI report with no header and a table that starts at position (0,0).  This table will easily export to Excel.

Sample WebI output prepped for Excel

Next, export the results to Excel 2007 format (xlsx)

Resulting WebI output in Excel

I then uploaded the xlsx file to the BI Launchpad and used it as a source for my Information Space.

 

Once Explorer was able to generate the information space the way I wanted it, I was ready to schedule the WebIntelligence report to Excel output.

Information Space based off an Excel file

Now, look at the properties of the Excel file, because we will need this when scheduling our WebIntelligence report.

Find the FRS location of the Excel File

I can now schedule the WebIntelligence report to run on a recurring basis, since I know the physical location of the Excel file in the FRS.

In my case the directory is: D:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\FileStore\Input\a_053\035\001\74549\   and the file name is:  work order excel-guid[e93a2669-0847-4cd2-b87d-390beb1c956c1].xlsx.

I can use that file name in the WebIntelligence scheduler and write over the source xlsx file for my information space.

When scheduling the file, make sure to select the following:

  • Recurrence – choose the frequency that makes sense for you.
  • Formats – choose Excel.  (The default is always WebI.)
  • Destinations – choose File System.  Do NOT keep an instance in the history.  Use the xlsx directory and filename.

Select the correct parameters in the recurring schedule

As a last step, I would then schedule Explorer to reindex the Information Space after the file is updated, and all will be good.

Scheduling Indexing for Explorer

Now the information space should be generated and everything should work just fine.

I must say that at first I really struggled to get this technique to work.  I’m not sure if my WebI output file was too large or if there was some other type of problem.  I spent hours (and hours and hours) trying to troubleshoot why this wasn’t working.  After speaking with technical support and having them attempt the same process, we thought that there was some type of incompatibility between the Excel files generated by WebIntelligence and the Excel format required by the Information Space for indexing.

There were a couple of different errors I encountered.  A few times I forgot to specify Excel as the output type so I got this error:

The only way I was able to fix the error was to restart my Explorer services and reschedule the WebI report and make sure it was exporting in Excel format.

Another error I got was when I didn't properly exit the information space before attempting to reindex it.  In that case I would get an indexing error and see this when I opened up the information space through “Manage Spaces”.

I found that this method of solving the issue was somewhat flaky.  Sometimes, after WebIntelligence overwrote the Excel file, Explorer was no longer able to index it.  It was very frustrating, and technical support was only able to provide limited help because this is not the recommended solution for the problem.

So what did SAP recommend?  They suggested a much less elegant but more robust and fully supported approach — a multi-source universe.

Approach 2 – Multi-source Solution

This solution is less straightforward, but I was able to get it working and SAP says that this is the only solution that’s officially supported.

There are three things we need to do:

  1. Generate the flattened hierarchy lists and load them into another database (e.g. SQL Server)
  2. Read the SAP Note 1656905 about creating a unx universe from a BW Infocube
  3. Link the two systems via a multi-source connection

In order to generate the flattened hierarchy, you must use the legacy unv universe against your Infocube.  The ability to flatten a hierarchy is not available in a unx universe.  (SAP says that BICS is not there to flatten the hierarchy and there are no plans to enable it because then it’s no longer a hierarchy.  Bummer.)

Next, create a WebIntelligence report based on a legacy unv universe.  Add all levels of the hierarchy to the report and run the report.  Once you have the report, export the results to an Excel file and load them into a relational database.

I created a table called: tblPCHier

Flattened BW Hierarchy in SQL Server
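
The exact columns depend on how deep your hierarchy is.  Mine looked roughly like the sketch below (the column names are illustrative: a key column to join back to the BEx fact table, plus one column per flattened level, following the L00/L01/L02 naming from the unv):

CREATE TABLE tblPCHier (
    NodeKey varchar(32) NOT NULL,  -- joins to the key column of the BEx fact table
    L00     varchar(60),           -- root level
    L01     varchar(60),
    L02     varchar(60),
    L03     varchar(60)            -- leaf level
);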

Next, I imported the Excel output into my new database table:

BW Hierarchy is loaded in SQL Server

Note:  You need to watch out for accidental duplicate names at lower levels of the hierarchy.  Because WebIntelligence will automatically try to aggregate multiple values, be aware that if child nodes have the same name but a different parent value, the child nodes will roll up and display an aggregated value within Explorer.  If this is not what you want, then you will want to make sure that the child node names are unique.
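
A quick way to spot leaf names that appear under more than one parent is a simple GROUP BY against the hierarchy table (again, the L02/L03 column names are just my example):

SELECT L03
FROM tblPCHier
GROUP BY L03
HAVING COUNT(DISTINCT L02) > 1;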

Next we are ready to go into the IDT (Information Design Tool) and create our multi-source universe.  Follow the instructions listed in the SAP Note 1656905 to understand how to create a unx on top of a BW Infocube.

Once our BW star schema has been added to our data foundation, we can add another connection to our other datasource, the relational database, so we can bring in our hierarchy.

Lastly, join the column from the BEx fact table (SAP) to the key of my hierarchy table (SQL Server).

When our multi-source universe is complete we should see a connection to SAP, a connection to a relational database, a data foundation and a universe.

Completed unx components

Here is a preview of my hierarchy table from within the IDT:

View of flattened Hierarchy

So now we just need to verify that everything we need is in the universe.  The big challenge being that not everything from BEx is available in a unx.  Here are some of the things we lose when we go to a relational universe:

  • Calculated Key Figures
  • Restricted Key Figures
  • Variables
  • Conditions
  • Unit / Currency Conversion
  • Hierarchies (which we know about)
  • Custom Structures
  • Time Dependent Objects

I suggest you commit this list to memory.

In one case I had over 50 calculated key figures that I needed to map into Explorer, so recreating the logic in SQL was difficult and tedious.

In that case I had measures that included some time dependent calculations:

  • Total AR
  • Current AR, Over 30 days, Over 60 days, Over 90 days, Over 120 days
  • Current Debit, Debit over 30, Debit over 60,  Debit over 90,  Debit over 120
  • Current Credit, Credit over 30, Credit over 60 and Credit over 120

In BEx, I had implemented exit variables to do dynamic time calculations.  Now I needed to do the same thing for Explorer.

To accomplish this, I built SQL Server views which dynamically calculated values such as Last Day Previous Month and Last Day Previous Month Previous Year.  I could then use these dynamic calculations in my universe.

Equivalent userexit logic in SQL Server
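
I can't reproduce the exact views here, but the idea is simple.  A minimal T-SQL sketch (the view and column names mirror the universe objects referenced below):

CREATE VIEW [V Last Day Prev Month] AS
SELECT DATEADD(DAY, -DAY(GETDATE()), CAST(GETDATE() AS date)) AS [Last Day Prev Month];

Subtracting the current day-of-month from today's date always lands on the last day of the previous month; wrapping the result in DATEADD(YEAR, -1, …) gives the equivalent view for the previous year.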

Although I included these views in the data model, I did not need to join them to anything.

These views were simply used to dynamically generate a date value which was used to restrict the queries to the correct data elements.

Here is a look at the measures that were created in the universe (click on the image to enlarge):

Measures within the Universe

Here is a screenshot of the WHERE logic for “Total AR, Last Month”:

WHERE Logic for restricted key figure

Here is some of the logic spelled out.

WHERE logic for “Total AR” that leverages curDate()

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= curDate()
AND
(@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) > curDate()
OR
@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'}
)

WHERE logic for “Total AR, Last Month” that leverages the Last Day Prev Month view

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
AND
(@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) > @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
OR
@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'}
)

If you have to do a lot of this type of time-based calculation logic, you might also want to review some of my previous blogs on the topic.  You don’t necessarily have to create views in the database to do the time calculations:
http://trustedbi.com/2009/12/29/timeseries1/
http://trustedbi.com/2010/01/06/timeseries2/
http://trustedbi.com/2010/01/18/timeseries3/

Caveat

This method of implementation is not for the faint-hearted.  It can potentially mean a lot of work.

I would like to highlight some important considerations:

  • If your hierarchies change on a regular basis, you will need to automate the updating of the SQL Server table which contains the hierarchy.
  • If you have a lot of calculated key figures, they will need to be recreated within SQL.
  • Any logic you built into variables or user exits may need to be recreated within SQL.

Given these constraints it’s hard for me to endorse converting all your Explorer information spaces to BusinessObjects v4.0 without first understanding the complexity of your Infocubes. The good news is that SAP BusinessObjects v4.1 will overcome these limitations.

Try the Excel method first.  It’s worth a shot.

 BusinessObjects v4.1 to the Rescue

Recently, in a What's New in Explorer 4.1 ASUG call, SAP announced that in BusinessObjects 4.1, unv files will be supported.  This is great news.  Of course, that begs the question: if unx is the way forward, how will we flatten our SAP hierarchies in the future?

SAP BusinessObjects 4.1 is currently in ramp-up and the latest information on Service Marketplace says that it is targeted to exit ramp-up on November 10, 2013.  As always, this date is subject to change.

One additional piece of advice: if you are curious to learn about future releases and maintenance schedules, I suggest you check out this site on Service Marketplace: https://websmp108.sap-ag.de/bosap-maintenance-schedule  Although these are only planned dates, they are useful to have when planning system maintenance and upgrades.

Hope you’ve found this helpful.

«Good BI»

Categories: Lumira

Using Explorer and Lumira with SAP BW

March 18th, 2013

This is a quick post to let you know that there is an excellent whitepaper available which explains everything you need to know about leveraging Explorer and Visual Intelligence with SAP BW.

The key takeaway is that organizations must leverage some type of acceleration technology – either HANA or BWA.

Here is the original article:
http://www.saphana.com/docs/DOC-2943

Here is a link to the must-read technical document:
http://www.saphana.com/servlet/JiveServlet/download/2943-3-9226/SAP%20VI%20and%20Explorer%20on%20BW%20powered%20by%20SAP%20HANA%20v2.pdf

This document outlines four different scenarios, showing the different implications for SAP BusinessObjects Explorer and SAP Visual Intelligence.

  1. SAP NetWeaver BW standalone without SAP NetWeaver BW Accelerator (BWA) and without SAP HANA
  2. SAP NetWeaver BW in combination with SAP NetWeaver BW Accelerator (BWA)
  3. SAP NetWeaver BW with SAP HANA, DB edition
  4. SAP NetWeaver BW with SAP HANA, Full Use Editions (allows write-back)

In order to leverage Scenario 3 or 4 you must be on:

  • SAP HANA 1.0 SP5 or higher
  • SAP HANA Modeler 1.0 SP4 Rev 37 or higher
  • SAP BusinessObjects Explorer 4.0 SP4 or higher
  • SAP Visual Intelligence 1.0 SP4 or higher
  • SAP Netweaver BW 7.3 SP7 or higher (with SAP Notes:  1703061, 1759172, 1752384, 1733519, 1769374 and 1790333)

When reading the whitepaper, make sure that you read through to the end.  Pages 13-15 provide important caveats and new roadmap information.

«Good BI»

Categories: BI Platform, HANA, Lumira

Visual Intelligence – Resolving Start Up Issues

January 10th, 2013

Enjoying Visual Intelligence?  I am.  Unfortunately, every once in a while something will go wrong.  Most of the time, stopping and restarting Visual Intelligence will fix the problem, but sometimes not.

One error I received recently was:  Open document failed / The engine failed to start before timeout.  Restart the application.  (HDB 10005)

Open document failed / The engine failed to start before timeout. Restart the application. (HDB 10005)

Here is a quick article from SCN about this error:  http://scn.sap.com/thread/3256250

Here is an SAP note which also has additional details:  http://service.sap.com/sap/support/notes/1783894

These articles provide good information but didn’t solve my problem.

Sybase IQ Hiccups

As you probably already know, Visual Intelligence leverages an embedded Sybase IQ engine for data manipulation.

Here you can see both SAPVisualIntelligence.exe and the embedded Sybase engine, iqsrv15.exe, listed in the task manager.

… and sometimes there are problems.  I've found that, assuming it ‘normally’ works okay (the install was correct) and nothing else on your PC has changed (no new software/firewall conflicts), the culprit is usually memory.  Remember, 8 GB is recommended for this application (4 GB minimum).

The problem is that sometimes when the Sybase Engine hiccups, it may continue to run in the background and make it impossible for Visual Intelligence to start-up again.

Resolving Issues with iqsrv15.exe

Here is what you need to do:

  1. After you shut down Visual Intelligence, make sure that iqsrv15.exe is not still running.  You can kill the process via Task Manager if necessary.
  2. Delete the DataBase directory where Hilo.db is stored.
  3. Restart the application and the database will be recreated.

You will find the DataBase directory in your working directory.  By default it is located here:
C:\Users\Administrator\AppData\Local\SAP\SAP Visual Intelligence

Location of local Visual Intelligence Database

Hope that helps!

«Good BI»

 

Categories: Administrators, Help!, Lumira