
Embracing the Cloud for Analytics – Part 3

May 19th, 2017

In the first two blogs we were introduced to SAP Leonardo.

Now we want to connect our data to the SAP Analytics Cloud (aka BusinessObjects Cloud).  We will do this in four key steps:

  1. Sign up for our SAP Analytics Cloud Trial Account
  2. Make our data accessible
  3. Connect to our data
  4. Create a Story within the SAP Analytics Cloud

The SAP sales team has been running a number of HANA DBaaS Academies in which they come onsite and walk your team through these steps.  By the end of the day your team will have hands-on experience with loading data into the SAP Cloud Platform and visualizing that data with the SAP Analytics Cloud.  If you are interested in this for your team, reach out to your SAP sales rep.

SAP Analytics Cloud Trial Account

As with the SAP Cloud Platform, SAP is keen to have people try out the SAP Analytics Cloud capabilities and provide feedback.  The Analytics Cloud has already received a number of wave updates, and as of the writing of this blog new capabilities are still being released every two weeks.  If you are interested in reading about the latest capabilities, Carmen Choy has an excellent series of blog posts.

Signing up for your free SAP Analytics Cloud 30-day trial takes less than a minute.  All you need to do is go to the trial website and put in your email address.  That’s it.

You will notice in the screenshots below that some of the branding still says SAP BusinessObjects Cloud.  Although the name change from SAP BusinessObjects Cloud to SAP Analytics Cloud has been announced, not all of the internal systems have fully made the switch.  As time goes on you should expect this branding to be updated.

After you put in your email address you will see two quick screens:

After putting in my email address… SAP running SIMPLE

… and then..

An Email Gets Generated

… done.  That’s what you call Software As a Service!

In your inbox you will get an email telling you to activate your account.  The 30-day countdown starts as soon as you begin the activation.  I wish they would limit the trial some way other than by a time limit, but for now that’s all we get.  Fortunately, by owning my own domain name I have access to unlimited email addresses.  🙂

This is what the activation will look like

Go ahead and click on the URL to activate your account.  The URL will take you to a login page.

All you need to do is provide a password; the other fields are read-only.

Once you click save, the system should return with the following message.

Message Once Activation is Complete

Now that we have our account activated, let’s return to HANA and get our data ready.  We need to make our data accessible to the SAP Analytics Cloud by exposing it as a HANA Calculation View, so let’s do that now.

Make The Data Accessible

This next step can be done in either the WebIDE or the Eclipse client.  Since I showed you how to configure the Eclipse client earlier, we will use that.

Before you begin this step, confirm that your SAP HANA database is running by logging into your SAP Cloud Platform account and verifying the status.  If necessary, start the database.  (Remember to refresh the screen since it does not refresh automatically.)

We want to create a Calculation View.  The Calculation View is exposed through the _SYS_BIC schema and is therefore accessible to external BI tools.  In our case we are going to create a Package and call it demo.

Right click on the Schema and choose New Package from the menu.

Right-Click and Create New Package

Next, provide the package a name, e.g. demo

Name the Package

After naming the package we want to create a New Calculation View.  We do this by right-clicking on the package and selecting Calculation View… from the menu.

Right-Click Create New Calculation View

If you are unfamiliar with how to create calculation views, you might wish to review this helpful video.  Although the terminology is changing and SAP no longer differentiates between Analytic Views and Calculation Views, it is still helpful to watch how a Calculation View is created.

I will assume you are using the dataset from my previous blog, so let’s create a calculation view called CV_INSURANCESALES.  We want this Calculation View to be a standard cube, so we can accept the default settings.

Enter a name and label (Description)

First, drag the table Insurance directly into the Aggregation box since we don’t need to join it to any other tables.

Drag the SYSTEM.Insurance Table into the Aggregation

This is what you should see:

All Available Columns will appear

Next, in the area labeled Details, select the columns you wish to expose as dimensions.  You can multi-select them by holding down the Ctrl key and clicking on each one.  Once you are done, right-click and choose Add to Output.

Add the Dimensions To Output

Next, select the columns you wish to expose as measures.  Highlight them, then right-click and choose Add to Aggregated Output.   By default all our measures aggregate as SUM, so no further changes are required.
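Conceptually, what this cube computes at query time is a grouped aggregation: group by the exposed dimensions, SUM the exposed measures.  As a rough sketch in SQL (column names are taken from my sample Insurance dataset; yours may differ):

```sql
-- Illustrative only: roughly what the Calculation View aggregation
-- node produces.  Adjust the schema, table, and column names to
-- match your own data.
SELECT
  "Product_Line",
  "Product",
  SUM("Total_Cost")   AS "Total_Cost",
  SUM("Total_Profit") AS "Total_Profit"
FROM "SYSTEM"."Insurance"
GROUP BY "Product_Line", "Product";
```

Keeping this mental model in mind helps when deciding which columns belong in the output as dimensions versus aggregated measures.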

Select the Measures and right-click Add As Aggregate Output

Now all our dimensions and measures should appear in the right panel.

The Dimensions and Measures Now Appear in the Output

We are almost done.  The next step is to click on the Semantics box; the contextual Details area will change accordingly.  As a last step in the design of the model, we need to select each of the dimensions and categorize it as a dimension.

Click on the Semantics box to see the exposed data elements

Highlight each of the Dimensions (all the items without an orange ‘measure’ icon in the leftmost column), and click on the very small blue square in the menu bar.

Select the Dimensions and click the blue square

Now the final step is to Save and Activate our model.  This will validate the model and activate it so that it becomes part of the _SYS_BIC schema within HANA.  To activate your model, click on the dark green circle in the top right menu.

Click on the button or choose the value from the drop down

As the model runs, you should see the log files update below.

After clicking Save & Activate, you will see the Job Log update below


Once you see the message Completed successfully, you are ready to consume this model from within the SAP Analytics Cloud.
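If you want to double-check the activation from the SQL console, activated calculation views appear in the _SYS_BIC schema, addressed as "&lt;package&gt;/&lt;view&gt;".  A sketch, assuming the package demo and view name used above:

```sql
-- The activated view lives in _SYS_BIC under "package/viewname":
SELECT TOP 10 * FROM "_SYS_BIC"."demo/CV_INSURANCESALES";

-- Confirm it was registered as a view:
SELECT VIEW_NAME
  FROM "SYS"."VIEWS"
 WHERE SCHEMA_NAME = '_SYS_BIC'
   AND VIEW_NAME LIKE 'demo/%';
```

If the first query returns aggregated rows, external BI tools such as the SAP Analytics Cloud should be able to see the same view.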

If you are interested in watching a similar exercise in video form, check out this YouTube video.

Connect To Your Data

Let’s log into our SAP Analytics Cloud environment.  You may notice from time to time a delay between when you click on something and when you see the screen update.  I have noticed that SAP Analytics Cloud isn’t the best at telling you when “it’s working”, so if in doubt give it some time to paint the screen – especially when it’s doing something for the first time.  If you are using a trial account, it’s hard to know in advance what URL you will be assigned; in my case the URL is derived from the name of the trial tenant I was assigned.

Based on the rebranding of SAP BusinessObjects Cloud to SAP Analytics Cloud, the base URL may change.

This is the URL that you will want to bookmark, not the URL that asks for your username and password.  The authentication process is handled by a separate identity server that is not necessarily running within the same namespace as your SAP Analytics Cloud account.

Once you have logged in with your username (email) and password, you should see the following welcome screen:

SAP Analytics Cloud Welcome Screen

By default your account will be loaded with a sample set of data and content.  Feel free to review these stories and get familiar with the SAP Analytics Cloud navigation.  The first thing we need to do is to use SAP Analytics Cloud to create a connection to our HANA data running within the SAP Cloud Platform.  This is very easy to do.  Simply select Create Connection from the menu:

Create a Connection to a Data Source

From here we will decide which type of connection we want to create.  Today there are “Live Connections” available for both HANA and SuccessFactors, and many more will be added in the future.  However, if you want additional live connectivity today, I recommend you check out SAP’s partner APOS, which provides live connectivity to other data sources such as Microsoft SQL Server with its Data Gateway for SAP Analytics Cloud.

On the right hand side you will see a (+) plus button.  Click this to add a new HANA Connection.

Create Live Data Connection to HANA

Next you’ll get a pop-up screen and be asked to fill in the relevant details:

Completing the Live Connect Information

Continue to scroll down:

  • SAP CP Account – Enter your account name.  If it is a trial account then you will include the “trial” at the end of the name.  (We went over this in more detail when we showed setting up the HANA account in this earlier blog.)
  • Database Name – This is the name of your HANA database within your SAP Cloud Platform environment.
  • Landscape Host – If you are using a Trial account then it will be Trial, otherwise you need to pick the correct data center.
  • User Name – This is the username for your HANA database within the SAP Cloud Platform environment.
  • Password – This is the password for your HANA database within the SAP Cloud Platform environment.

… so fill out the bottom section as well:

Completing the Live Connect Information

Once you click OK, the connection will validate what you’ve filled in by connecting to the HANA database.  If you are prompted again for a username and password in a pop-up window, either your HANA database is not currently running or you are using an incorrect username and password.

This Live Connection is now added to our landscape:

List of all current connections

Next we want to connect to our CV_INSURANCESALES view within the SAP Cloud Platform.  To do this we need to create a model.  Choose Create Model from the menu:

Navigate to Create > Model

From this menu we will select the live data source:

Select a datasource and choose Live data connection.

and then choose the data source from our HANA database in the SAP Cloud Platform.

Select CV_INSURANCESALES as our Model

You will also want to name your model; in my case I used the name Insurance.  Once you click OK, the system will open the Modeler view with the list of measures.  This may appear confusing if you’ve not seen it before.  Because SAP Analytics Cloud has a Budgeting & Planning framework at its core, the measures are stored within a mandatory “Accounts” dimension.  (This allows for versions of measures, which is critical for Budgeting & Planning scenarios.)

If you want to change some of the measure descriptions or aggregation methods you can do it here by updating the values directly in the grid.  Once any changes you want to make are complete, click the Save icon on the menu.

Edit the measures as necessary and click Save…

and your changes will be saved:

Changes Saved Successfully.

All the foundational elements are in place.

  1. Our data has been loaded into the SAP Cloud Platform.
  2. We have established a connection between the SAP Analytics Cloud and our data in the SAP Cloud Platform.
  3. We have created a model on top of our data.

So now comes the fun part.  Now that everything is connected we can begin to slice and dice the data and get some insights.

Create a Story within the SAP Analytics Cloud

SAP Analytics Cloud allows you to consume interactive Analytic applications from the same interface that you use to create them.  Let’s create our first story by selecting Create > Story from the menu:

Creating a New Story

This will open up a story wizard.  You can select from an existing template or start with your own blank canvas.  For this exercise let’s start with a blank canvas by selecting Add a Canvas Page.

Create a blank page for your story

Next we will choose whether we want our page to be used for data exploration (data sources on the left) or for traditional Analytic elements.  In our case let’s select a chart so that we can begin to visualize our data.

Add a chart to your new story

Next, you will choose your data source.  We want to select the model (data set) to connect our chart to.  Let’s select Insurance.

Connect to our existing Insurance model

Next you will want to select Total_Cost as your measure:

Choose the data elements from the chart builder menu

Once you have selected the measure Total_Cost and the dimension Product_Line, the chart should appear like the one below.

Bar Chart with Total Cost by Product Line

The (+) plus button in the Insert menu above is where you can add additional elements to the canvas.  Feel free to play around and get comfortable with the Designer Interface.

Insert menu for adding additional objects

Once we have updated the canvas to meet our needs, we will want to save our story.  Click the save icon and give your story a name.

Save your Insurance Revenues story


Congratulations, you have created a story within SAP Analytics Cloud connected to a Live data source.  I encourage you to play around and get comfortable with the Designer.  One of my biggest challenges was overcoming my habit of constantly using right-click to look for shortcut options, because within legacy BusinessObjects the web interface still fully supported the right-click paradigm.

For more resources on learning the many other capabilities within SAP Analytics Cloud, please check out these YouTube videos.

Here is a story I created using the data we just uploaded.  It took about 10 minutes to create.

Finished Dashboard

I hope you find these blogs helpful in understanding how you can begin to get familiar with managing and analyzing your data within SAP’s cloud offerings.


Mike Flannagan shared his vision around SAP Analytics at Sapphire 2017 yesterday, and the cloud was a cornerstone of his message.  This is the primary go-forward strategy for Analytics at SAP, so we should expect significant investment by SAP in the development of this solution over the coming months and years.

This solution is still in its infancy, but SAP is well on the road to establishing a strong offering for cloud-based analytics against any data source.

«Good BI»

Embracing the Cloud for Analytics – Part 2

May 18th, 2017

The other two blogs in this series are here:

So now that you’ve got an SAP Cloud Platform account, we want to start uploading some data to it.

Although the SAP Cloud Platform provides a good web interface for working with data in the cloud, my personal preference is to use the Eclipse IDE.  In this blog I want to show you how easy it is to set up the Eclipse client so that you can access your HANA database in the cloud and start loading some data.

Installing Java and Eclipse for SAP Cloud Platform

I found this handy video from Dan Grissom, which walks through the configuration process for downloading and installing Eclipse on Windows 10.  It will automatically give us the latest versions of all the software.

It uses a website called Ninite which creates a custom installer for the products you wish to install.  This video will show you how to:

  1. Download and install the required Java JDK components.
  2. Download and install Eclipse.

If you would rather install the components manually, I recommend you follow this step-by-step tutorial within the SCN.  (Below I will pick up at step 8 of the tutorial.)

Here is a handy wiki page if you have challenges during the installation.

So now that we’ve got the Java and Eclipse components installed, let’s continue.

Configuring Proxy Settings in Eclipse

If you are running within a corporate environment, you may also need to configure proxy settings for Eclipse.  You can change your proxy settings by accessing Window > Preferences.

This will open up the Preferences screen.  Under General > Network Connections you will find these settings.  You can also find them by typing “Network” into the filter/search bar.

Configure Your Manual Connection for your Corporate Proxy

Under Active Provider choose Manual.  Next, select the first schema, HTTP, and choose Edit.  Add the name of the host and port in the space provided.  Because I work at SAP, my settings are our corporate proxy on port 8080.

Set your Proxy Entries to connect through a corporate firewall

Choose OK and repeat the same process for HTTPS.

Once complete your settings should look something like this:

Confirm your settings for Manual

Once the proxy is configured you can switch back and forth between Manual (I’m in the office and need proxy) and Direct (I’m outside my proxy environment).  Now we are ready to install the SAP Cloud platform tools into Eclipse.

Installing the SAP Cloud Platform tools for Java

Choose Install New Software:

Help > Install New Software

and copy the following URL into the software list.

Pull the SAP Cloud Components directly from SAP

From here you can choose to install as much software as you want but we only need the SAP HANA Cloud Platform Tools, so select this option and choose Next.

Select SAP Cloud Platform Tools

After accepting the license agreement, Eclipse will ask you to restart.  We’re almost done.

Download and install the SAP Cloud Platform SDK

Next, we need to download and install the SAP Cloud Platform SDK.  This can be downloaded directly from the SAP Cloud Platform Tools site:

SAP Cloud Platform SDK

The SDK comes in different flavors:

  • Java Web: Provides a lightweight runtime supporting a subset of the standard Java EE APIs (Servlet, JSP, JSTL, EL). Currently there is a 1.x version of this runtime available
  • Java EE 6 Web Profile: Provides certified support for the whole Java EE 6 Web Profile APIs. Currently there is a 2.x version of this runtime available
  • Java Web Tomcat 7: Provides support for some of the standard Java EE APIs (Servlet, JSTL, EL). Currently there is a 2.x version of this runtime available
  • Java Web Tomcat 8: Provides support for some of the standard Java EE APIs (Servlet, JSTL, EL). Currently there is a 3.x version of this runtime available

For our exercise we will select and download Java Web.

After it has been downloaded, place it into an appropriate directory.  In my case I created a directory called C:\dev\ and then extracted the contents of the .zip file there.  So my directory looks like this:

Extracted SDK

Next we will need to associate this Java Web SDK with Eclipse so the software components installed with Eclipse can access them.  Choose Window > Preferences.

Edit Eclipse Preferences

Choose Server > Runtime Environment. Click the Add… button to open the New Server Runtime dialog.

Select Java Web from the list of Environments


Scroll down and select SAP > Java Web from the list of Server Runtime Environment(s) and click Next.

Provide the folder to which you have extracted the SDK by clicking the Browse… button and choosing the respective folder, e.g. c:\dev\neo-java-web-sdk-1.127.11.

Select the SDK root directory

Click on Finish.

We have now installed the SAP Cloud Platform Tools for Java and are ready to start using Eclipse to connect to HANA within the SAP Cloud Platform.

Connecting to HANA

First, make sure your HANA instance is running in the cloud before you attempt to connect to it.   In my previous blog I shared that a trial account only allows the database to run for 12 hours before it is shut down, so you may sometimes forget to restart your HANA database before going into Eclipse.  To confirm your database is running, log into the SAP Cloud Platform via the web and check the database status.  If necessary, start the database.  (Remember to refresh the screen since it does not refresh automatically.)

Once you have confirmed your HANA instance is running, you can continue.

First let’s switch to the HANA Administrative Console perspective within Eclipse by choosing:  Window > Perspective > Open Perspective > Other…

Select the Administration Perspective

From here we can add a new HANA system to our Admin Console.  Choose Add Cloud System…

Select Add Cloud System…

How we fill out this section will depend on your account.

Landscape Host:  This is the URL of your SAP Cloud system.  To determine the host for your account, take the root URL and remove the “account.” prefix at the beginning.  For example:

  • Trial accounts ( would have a host landscape of
  • US Ashburn data center ( would have a host landscape of

Account Name:  This is the Account name associated with your username and password.  You can see this by logging into your SAP Cloud Platform account and looking at the menu bar.

  • Standard Trial accounts: pXXXXXXXXXX6trial, for example p1942629386trial
  • SAP Employee Trial accounts: iXXXXXXtrial, for example i817400trial
  • Non-Trial accounts can be anything, e.g. presales

Find the Account Name

User name:  This is the name you will use to access the SAP Cloud Platform.  This is typically the same name as the account name but without the “trial” at the end.

  • Standard Trial accounts: pXXXXXXXXXX, for example p1942629386
  • SAP Employee Trial accounts: iXXXXXX, for example i817400
  • Non-Trial accounts: this is the name of the subaccount, for example b0b8e5ce5.  Always check the subaccount information to be sure, as in the screenshot below:

From within the SAP Cloud Platform you can see these values

Password:  This will be the password you use to log into the SAP Cloud Platform.  If you are an SAP Employee and SSO is used to access the SAP Cloud Platform, you will use your corporate password.

Here are two examples.  This is what an SAP Employee might see:

Example SAP Employee Setup

This is what other trial accounts look like:

Example Standard Setup

Selecting the HANA Database

Once this information has been entered correctly, you will be prompted to select the database, username and password.

  • Database:  Select the correct HANA database from the drop down.  In my case, my current HANA database is called demo.
  • Username:  This is the username for the HANA database.  This will typically be SYSTEM.
  • Password:  The password you assigned to the HANA database user.  This is typically the password for the SYSTEM user.
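Using SYSTEM is fine for a trial, but in a real environment you would typically create a dedicated database user for BI connections rather than handing out the SYSTEM password.  A hedged sketch of what that might look like in the SQL console (the user name, password, and grant below are illustrative, not from this tutorial):

```sql
-- Hypothetical example: a dedicated user for BI access instead of
-- SYSTEM.  Names, the password, and the grant are illustrative only.
CREATE USER BI_USER PASSWORD "Initial15CharPw1";
GRANT SELECT ON SCHEMA "SYSTEM" TO BI_USER;
```

For the trial exercise in this blog, though, the SYSTEM user is all we need.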

Now Enter the Database details

Once added, your database will appear in the Systems list on the left.  In my case I have added three different HANA instances from three different cloud accounts to my Eclipse environment:

Connections to 3 different cloud HANA instances

Loading Data Into HANA

So next, let’s get some data loaded.  One of the easiest ways to import data into HANA is to use a spreadsheet.

If you want to follow along using my dataset, you can access it here: Insurance Data.  Otherwise feel free to use your own dataset.  I should warn you that the HANA data import is not the most robust import tool, depending on the data formats you are using.

To start the process, choose File > Import:

Select File > Import… from the menu

We want to import file data, so let’s choose Data from Local File from the list.

Scroll down to SAP HANA Content > Local File

Next select the name of the HANA System that you want to use to load the data:

Select the Target System for the data

Choose a local Excel file, and specify the target schema and table name.  Here I am going to create a new table:

Importing the sample Insurance data

Click Next.

You may want to modify some of the data attributes, since the import tool is making a best guess based on a sample of the data contained within the file.  For example, I rounded up the text length of most of the fields and changed all the Decimal fields to Decimal(10,2).  We can adjust that here.

Here are some of the schema adjustments I made
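The adjustments above amount to choosing column types up front.  If you prefer, you can create the table yourself with DDL before importing rather than letting the wizard guess.  A sketch, assuming columns along the lines of my sample Insurance data (your column list will differ):

```sql
-- Hypothetical DDL reflecting the kind of adjustments described:
-- generous text lengths, DECIMAL(10,2) for the numeric columns.
CREATE COLUMN TABLE "SYSTEM"."Insurance" (
  "Product_Line"      NVARCHAR(50),
  "Product"           NVARCHAR(50),
  "Customer"          NVARCHAR(100),
  "Customer_Discount" DECIMAL(10,2),
  "Product_Discount"  DECIMAL(10,2),
  "Total_Cost"        DECIMAL(10,2),
  "Total_Profit"      DECIMAL(10,2)
);
```

If the table already exists, the import wizard can load into it directly and you skip the type-guessing step entirely.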

Click Finish to begin importing the data.  After the load is attempted, refresh the view on the left so that the new table you created appears under Catalog > (Selected Schema) > Tables.  You can do this by right-clicking on the schema and selecting Refresh.

Right-click and choose Refresh

The Job Log window at the bottom of the screen will tell you whether or not the data load was successful.  If the load was unsuccessful, make the appropriate changes, delete the empty table, and try the load again.  The most common problem is with the default column definitions that HANA guessed from the data sample.

Job Log provides a status

Once the load is complete, preview the data within Eclipse by right-clicking on the table and choosing Open Data Preview.

Here you can see the data you just loaded

We can begin to do some initial data quality checks.  Click on the Analysis tab to switch to the Analysis view:

HANA allows you to analyze your data in place

Choose Analysis and drag Product into the Label Axis and Total Profit into the Values Axis.

Products by Revenue

Feel free to play around with the Analysis features.  They are quite robust, allowing you to change chart types, add filters, and more.  Here I switched to a bubble chart and updated the view to include Customers by Customer Discount, Product Discount and Total Profit.

bubble chart

If you want to run some of your own SQL queries, you can do that too by right-clicking on the schema and selecting Open SQL Console from the menu.  If you have more questions about how to use SQL with HANA, there is a series of great videos available here.
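For example, a couple of quick data-quality checks on the freshly loaded table might look like this in the SQL console (again assuming my sample table name; swap in your own):

```sql
-- Sanity checks after the import:
SELECT COUNT(*) FROM "SYSTEM"."Insurance";            -- row count matches the file?
SELECT COUNT(DISTINCT "Product")
  FROM "SYSTEM"."Insurance";                          -- expected number of products?
SELECT SUM("Total_Profit") FROM "SYSTEM"."Insurance"; -- spot-check a total
```

Comparing these numbers against the source spreadsheet is a cheap way to catch truncated loads or mis-typed columns before you start building models on top of the data.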


We have completed the configuration of the desktop tools, which give us an alternative to the WebIDE experience that is available by default with the SAP Cloud Platform.  Eclipse is a terrific development environment for your Analytics team when it comes to managing and modelling your data within the SAP Cloud Platform.

Next we will want to connect our SAP Analytic tools to our data that resides within HANA and setup the automated ‘refresh’ of data from on-premise systems into the cloud.

«Good BI»




Embracing the Cloud for Analytics and Applications

May 16th, 2017

The other two posts following this introduction are here:


If you’ve been following Sapphire 2017 you couldn’t have missed the launch of SAP Leonardo, and over the last few weeks I’ve been getting my hands on it.

Why Leonardo?

As an Analytics professional I’m keen to create a frictionless environment that will allow me to quickly move my data to the cloud and connect to it using my Analytics Suite.  SAP has long supported hosting your traditionally on-premise data and applications in the cloud via HEC (HANA Enterprise Cloud), but that’s not what we are talking about here.  I’m looking to move from a traditional on-premise mindset to a cloud-first mindset: how can I use applications that are already hosted for me in a SaaS offering?

Enter SAP Cloud Platform – the underlying framework of Leonardo.

SAP Cloud Platform is the key strategic platform-as-a-service infrastructure that provides the framework for SAP Leonardo. Additionally, SAP Cloud Platform provides end-to-end micro services for machine learning, analytics, Big Data, security, user experience, user management, and backend integration application program interfaces.

Once I have my data in the cloud I can then connect to the intelligence services available in SAP Leonardo.  Leonardo includes best-of-breed business services that enable users to rapidly build Internet of Things applications, create reusable application services, and apply predictive algorithms.   From a pure analytics perspective, Leonardo offers a subscription model in which I can extend cloud/on-premise applications, integrate with SAP applications, and build new analytics applications.  Although I come from an on-premise perspective, it’s only a matter of time before most of the work we do is done through some type of SaaS offering with the ability to customize and tailor the solution to our needs.  Change is hard, but we each need to embrace it, and the value proposition is there.

Subscription Based Value Proposition

Subscription pricing means you can:

  • Forego hardware and software procurement, annual maintenance fees and upgrades.
  • Acquire only the amount of software needed as opposed to traditional licenses per device.
  • Reduce and/or eliminate IT infrastructure salaries expenses for some positions related to ongoing system maintenance.
  • Implement quickly (No hardware, platform software or application software to install and limited configuration)

"If everything is under control, you're just not going fast enough." - Mario Andretti

So time to start moving!

Getting Started with a Free Account

Over the next few blogs I’ll share with you how to start using the SAP Cloud Platform.  There are a lot of good resources out there already but I will be approaching it from an analytics angle.  How can I as an analytics professional quickly load data up into the cloud and begin doing some visualizations?  The answer is very quickly.

The first thing you want to do is sign up for an SAP Cloud Platform free trial.  It is super simple and you can do it in 5 easy steps; the instructions are right here.

Once you have registered for an account and then activated the account via email you should see the following message:

Figure 1 – Activation Successful

Create Your Data Repository

Let’s jump in and create our first database for our data.

Note that there are existing Cloud Platform tutorials put together by the SAP HANA Academy; however, they were recorded in 2014 and the interface has changed pretty dramatically over the last three years, so I’ll help guide you through the process after you have signed up for your account.

When you log in for the first time, you will see this welcome screen:

Figure 2 – Leonardo Platform Home Page

Above you can see I have highlighted your account name in a red box.  This is how SAP identifies your specific tenant with the Leonardo Cloud.

The first thing we want to do is create a HANA database.  It has all the power of a regular HANA DB but comes with a trial size of 1 GB.  WARNING: Because this is a trial/developer account, the database will only run for 12 hours before it is shut down.  Additionally, if the database is not used for 14 days, SAP will send you a couple of emails before deleting it in order to save resources.  Finally, you are only allowed to create 1 HANA DB.

Figure 3 – Click on Databases & Schemas

Click on Persistence to expand the menu, then click on Databases & Schemas.  You will see a list of the current databases within your tenant; initially the list will be empty.  To create a new database, choose New and fill out the details as follows:

  1. Database ID: the name of the database; here it is called “demo”.
  2. SYSTEM User Password (must be 15 characters long).
  3. Turn “Configure User for SHINE” on and enter a user and password (must be 15 characters long).
  4. Turn the rest of the parameters ON.

Figure 4 – Create Database

Click Save.  Next you will see the screen update, and the message description will say that your database creation has started.  Creation of the database will take about 5-10 minutes.

Navigating the Interface

There is a main menu (1) to navigate up and down the different services. A secondary menu on the left (2) is a contextual menu which changes based on where you are in the main menu.


As you click up and down the main menu, the contextual menu on the left will change accordingly.

Figure 5 – Menu Navigation

Once the database has been created, we will want to jump up a level by clicking on the name of our database, “demo”.

Figure 6 – Breadcrumb menu to the HANA DB

You should now see that the database has started. If you are already familiar with HANA, you will notice that at the bottom of this interface you can access your HANA Cockpit and Development Workbench. Go ahead and access both the Cockpit and the Workbench.

Figure 7 – Access the HANA DB via the Administration and Developer Tools

SAP HANA Cockpit

This is where you administer your HANA environment.  You can add/remove users, add/remove permissions, import content, manage changes and many other things.  This is also where you could choose an automated upgrade of your HANA database when a new version comes out.

Log in using the SYSTEM username and password you defined when you created the HANA database.

Figure 8 – Login Screen to HANA

The first time you access the Cockpit, the system will detect that Cockpit access is not yet configured for SYSTEM, so it will automatically add the following roles to your SYSTEM user: sap.hana.admin.roles::Administrator, sap.hana.ide.roles::TraceViewer, sap.hana.ide.roles::SecurityAdmin.

Figure 9 – The Cockpit provides access to key administrative controls

SAP HANA Workbench

This is where you manage all your HANA content – the catalog and the editor.

Figure 10 – Workbench WebIDE

Florian Pfeffer does a nice job of explaining the difference.  He describes it this way:

The Catalog View gives a view on the runtime objects of the database artifacts.
The first level you see under this view is the schema to which the runtime
objects are assigned. In case you create e.g. a table using DDL statement
CREATE TABLE you see created runtime object for the table under the schema
which was used in the statement (but the table is not visible in the Content view).

The Editor View gives a view on the design time objects of e.g. database
artifacts (and further static content) stored in the HANA repository
(therefore the objects are also called repository objects). Under the
Content View the design time objects are organized in packages (1 to n
levels). The activation of a design time object for a database object
leads to the creation of the runtime object in the specified database
schema. So if you create e.g. a table via an hdbtable artifact
(e.g. mypackage.test.TestTable.hdbtable) and activate it you can see in
the Content view the design time object "TestTable.hdbtable" in package
"mypackage.test" and in the Catalog view under the schema you defined
in the design object the runtime object "mypackage.test::TestTable".
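To make Florian's example concrete, here is a minimal sketch of what such an XS classic design-time artifact might look like (the schema, table type, and column names are hypothetical, following his mypackage.test.TestTable.hdbtable example):

```
// mypackage.test.TestTable.hdbtable (design-time artifact)
table.schemaName = "MYSCHEMA";
table.tableType  = COLUMNSTORE;
table.columns = [
    {name = "ID";   sqlType = INTEGER;  nullable = false;},
    {name = "NAME"; sqlType = NVARCHAR; nullable = true; length = 80;}
];
```

Activating this artifact would create the runtime table "mypackage.test::TestTable" in schema MYSCHEMA, visible in the Catalog view, while the .hdbtable file itself stays in the package hierarchy shown in the Editor view.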


Who knew installing and creating a new HANA database could be so easy?  No software to install.  Minimal configuration.

Although the overall size of this trial account limits our HANA database to 1 GB, it gives us a good introduction to what it’s like to run HANA in a SaaS landscape.

Next we’ll show how to configure the Eclipse IDE to connect to HANA running within the Leonardo Platform, in case you’d prefer to interact with your database using a traditional desktop client.

«Good BI»

Migrating BEx-based Information Spaces to BusinessObjects 4.0

August 8th, 2013 3 comments

Sometimes in software it can feel that you take two steps forward and one step back.

One example of this is with BEx-based Explorer Information Spaces when migrating from XI 3.1 to BO 4.0. In 4.0, SAP removed the ability to create information spaces against the legacy universe (unv) and instead required everyone to use the new universe format (unx).

On the surface this lack of support for the legacy universe didn’t seem to really be a problem because BusinessObjects 4.0 provides a painless way to migrate universes from the legacy unv to unx format.

What I didn’t realize until a few months ago was that this WAS a big problem. A big problem for customers who were creating Information Spaces based on BEx queries and Infocubes.

I’d like to share with you my journey to understand what is really required to move my BEx-based Information Spaces to BusinessObjects v4.0.

Explorer, BEx and Unx

Explorer is NOT an OLAP-aware product, so it doesn’t understand hierarchies. In XI 3.1 the unv would flatten the hierarchy for you, generating levels such as L00, L01, L02, L03, etc. There are some advantages to this approach, but there are obvious drawbacks as well.

With BusinessObjects v4.0, SAP rolled out the idea of a transient universe, such that if you wanted to create a WebIntelligence report you didn’t have to create a universe first. WebIntelligence would create a universe on the fly and a hierarchy is treated as a single column with the ability to use +/- to expand and collapse. (You can read more about WebIntelligence and BICS connectivity here.)

If you try to convert an XI 3.1 universe based on a BEx query to a unx, you get the following error:

Now What?

The only two options I came up with to overcome this challenge were:

  • Use WebIntelligence (unv) to generate an Excel file and have Explorer index the Excel file.
  • Leverage a multi-source relational connection to connect to the BEx cube and hierarchy relationally

Approach 1 – Excel Output

The approach I took here was to use the legacy unv file to create a WebI report. I would then schedule this WebI report to generate an Excel file, which would overwrite the ‘source’ Excel file of the Information Space.

To set this up, I created the WebI report with no header and a table that starts at position (0,0).  This table will easily export to Excel.

Sample WebI output prepped for Excel

Next, export the results to Excel 2007 format (xlsx)

Resulting WebI output in Excel

I then uploaded the xlsx file to the BI Launchpad and used it as a source for my Information Space.


Once Explorer was able to generate the information space the way I wanted it, I was ready to schedule the WebIntelligence report to Excel output.

Information Space based off an Excel file

Now, look at the properties of the Excel file; we will need them when scheduling our WebIntelligence report.

Find the FRS location of the Excel File

I can now schedule the WebIntelligence report on a recurring basis, since I know the physical location of the Excel file in the FRS.

In my case the directory is: D:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\FileStore\Input\a_053\035\001\74549\   and the file name is:  work order excel-guid[e93a2669-0847-4cd2-b87d-390beb1c956c1].xlsx.

I can use that file name in the WebIntelligence scheduler and write over the source xlsx file for my information space.

When scheduling the file, make sure to select the following:

  • Recurrence – choose the frequency that makes sense for you.
  • Formats – choose Excel. (The default is always WebI.)
  • Destinations – choose File System. Do NOT keep an instance in the history. Use the xlsx directory and filename.

Select the correct parameters in the recurring schedule

As a last step, I would then schedule Explorer to reindex the Information Space after the file was changed and all will be good.

Scheduling Indexing for Explorer

Now the information space should be generated and everything should work just fine.

I must say that at first I really struggled to get this technique to work.  I’m not sure if my WebI output file was too large or if there was some other type of problem.  I spent hours (and hours and hours) trying to troubleshoot why this wasn’t working.  After speaking with technical support and having them attempt the same process, we thought that there was some type of incompatibility between the Excel files generated by WebIntelligence and the Excel format required by the Information Space for indexing.

There were a couple of different errors I encountered.  A few times I forgot to specify Excel as the output type so I got this error:

The only way I was able to fix the error was to restart my Explorer services and reschedule the WebI report and make sure it was exporting in Excel format.

Another error occurred when I didn’t properly exit the information space before attempting to reindex it. In that case I would get an indexing error and see this when I opened the information space through “Manage Spaces”.

I found this method of solving the issue to be somewhat flaky. Sometimes, after WebIntelligence overwrote the Excel file, Explorer was no longer able to index it. It was very frustrating, and technical support was only able to provide limited help because this is not the recommended solution for the problem.

So what did SAP recommend?  They suggested a much less elegant but more robust and fully supported approach — a multi-source universe.

Approach 2 – Multi-source Solution

This solution is less straightforward, but I was able to get it working and SAP says that this is the only solution that’s officially supported.

There are three things we need to do:

  1. Generate the flattened hierarchy lists and load them into another database (e.g. SQL Server)
  2. Read the SAP Note 1656905 about creating a unx universe from a BW Infocube
  3. Link the two systems via a multi-source connection

In order to generate the flattened hierarchy, you must use the legacy unv universe against your Infocube. The ability to flatten a hierarchy is not available in a unx universe. (SAP says that BICS does not flatten hierarchies and there are no plans to enable it, because a flattened hierarchy is no longer a hierarchy. Bummer.)

Next, create a WebIntelligence report based on a legacy unv universe.  Add all levels of the hierarchy to the report and run the report.  Once you have the report, export the results to an Excel file and load them into a relational database.
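Before loading the levels into the database, it helps to see exactly what "flattening" means. Here is a small Python sketch (my own illustration, not part of any SAP tool) that turns parent-child rows into the L00/L01/L02-style level columns Explorer expects:

```python
def path_to_root(node, parent):
    """Walk child -> parent links from a node up to the root."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return list(reversed(path))

def flatten_hierarchy(leaves, parent, depth):
    """Return one row per leaf with columns L00..L{depth-1}.
    Shorter branches are padded by repeating the leaf value."""
    rows = []
    for leaf in leaves:
        path = path_to_root(leaf, parent)
        path += [path[-1]] * (depth - len(path))
        rows.append({f"L{i:02d}": v for i, v in enumerate(path)})
    return rows

# e.g. parent = {"East": "US", "NY": "East"} gives leaf "NY" the row
# {"L00": "US", "L01": "East", "L02": "NY", "L03": "NY"}
```

Each row of the output corresponds to one record in the relational hierarchy table.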

I created a table called: tblPCHier

Flattened BW Hiearchy in SQL Server

Next, I imported the Excel output into my new database table:

BW Hierarchy is loaded in SQL Server

Note: You need to watch out for accidental duplicate names at lower levels of the hierarchy. Because WebIntelligence will automatically try to aggregate multiple values, be aware that if child nodes have the same name but different parent values, the child nodes will roll up and display an aggregated value within Explorer. If this is not what you want, make sure that the child node names are unique.
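A quick way to spot this duplicate-name problem before it bites you, sketched in Python (the input shape is illustrative, not an SAP API):

```python
from collections import defaultdict

def ambiguous_children(rows):
    """rows: iterable of (parent, child) pairs from the hierarchy.
    Returns child names that appear under more than one parent --
    these are the values Explorer would silently aggregate together."""
    parents_of = defaultdict(set)
    for parent, child in rows:
        parents_of[child].add(parent)
    return sorted(child for child, parents in parents_of.items()
                  if len(parents) > 1)
```

Any names this returns should be made unique (for example by prefixing the parent) before loading the table.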

Next we are ready to go into the IDT (Information Design Tool) and create our multi-source universe. Follow the instructions in SAP Note 1656905 to understand how to create a unx on top of a BW Infocube.

Once our BW star schema has been added to our data foundation, we can add another connection to our other datasource, the relational database, so we can bring in our hierarchy.

Lastly, join the column from the BEx fact table (SAP) to the key of my hierarchy table (SQL Server).

When our multi-source universe is complete we should see a connection to SAP, a connection to a relational database, a data foundation and a universe.

Completed unx components

Here is a preview of my hierarchy table from within the IDT:

View of flattened Hierarchy

So now we just need to verify that everything we need is in the universe.  The big challenge being that not everything from BEx is available in a unx.  Here are some of the things we lose when we go to a relational universe:

  • Calculated Key Figures
  • Restricted Key Figures
  • Variables
  • Conditions
  • Unit / Currency Conversion
  • Hierarchies (which we know about)
  • Custom Structures
  • Time Dependent Objects

I suggest you commit this list to memory.

In one case I had over 50 calculated key figures that I needed to map into Explorer, so recreating the logic in SQL was difficult and tedious.

In that case I had measures that included some time dependent calculations:

  • Total AR
  • Current AR, Over 30 days, Over 60 days, Over 90 days, Over 120 days
  • Current Debit, Debit over 30, Debit over 60,  Debit over 90,  Debit over 120
  • Current Credit, Credit over 30, Credit over 60 and Credit over 120

In BEx, I had implemented exit variables to do dynamic time calculations.  Now I need to do the same thing for Explorer.

To accomplish this, I built SQL Server views which dynamically calculated values such as Last Day Previous Month and Last Day Previous Month, Previous Year. I could then use these dynamic calculations in my universe.
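The views themselves are plain date arithmetic. Here is the equivalent logic sketched in Python (my own translation of the calculation, not the actual view definitions):

```python
from datetime import date, timedelta

def last_day_prev_month(today: date) -> date:
    """First day of the current month, minus one day."""
    return today.replace(day=1) - timedelta(days=1)

def last_day_prev_month_prev_year(today: date) -> date:
    """Same month one year earlier; the last day is recomputed
    so that Feb 29 / Feb 28 is handled correctly."""
    d = last_day_prev_month(today)
    y, m = d.year - 1, d.month
    first_of_next = date(y + 1, 1, 1) if m == 12 else date(y, m + 1, 1)
    return first_of_next - timedelta(days=1)
```

For example, run on 2013-08-08 these return 2013-07-31 and 2012-07-31 respectively.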

Equivalent userexit logic in SQL Server

Although I included these views in the data model, I did not need to join them to anything.

These views were simply used to dynamically generate a date value which was used to restrict the queries to the correct data elements.

Here is a look at the measures that were created in the universe (click on the image to enlarge):

Measures within the Universe

Here is a screenshot of the WHERE logic for “Total AR, Last Month”:

WHERE Logic for restricted key figure

Here is some of the logic spelled out.

WHERE logic for “Total AR” that leverages curDate()

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= curDate()
AND ( @Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) > curDate()
OR @Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'} )

WHERE logic for “Total AR, Last Month” that leverages the Last Day Prev Month view

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
AND ( @Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) <= @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
OR @Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'} )
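To make the intent of the "Total AR" restriction explicit, here is how I read it, sketched in Python (the field names are illustrative; the cube stores "not cleared" as a date before 1900):

```python
from datetime import date

NOT_CLEARED_SENTINEL = date(1900, 1, 1)

def is_open_item(posting_date: date, clearing_date: date, cutoff: date) -> bool:
    """An item counts toward AR as of `cutoff` if it was posted on or
    before the cutoff and was not yet cleared at that point."""
    posted_in_time = posting_date <= cutoff
    not_yet_cleared = (clearing_date > cutoff
                       or clearing_date < NOT_CLEARED_SENTINEL)
    return posted_in_time and not_yet_cleared
```

Swapping curDate() for the Last Day Prev Month value is what turns "Total AR" into "Total AR, Last Month".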

If you have to do a lot of this type of time-based calculation logic, you might also want to review some of my previous blogs on the topic.  You don’t necessarily have to create views in the database to do the time calculations:


This method of implementation is not for the faint-hearted. It can potentially mean a lot of work.

I would like to highlight some important considerations:

  • If your hierarchies change on a regular basis, you will need to automate the updating of the SQL Server table which contains the hierarchy.
  • Any calculated key figures will need to be recreated within SQL.
  • Any logic you built into variables or user exits may need to be recreated within SQL.

Given these constraints it’s hard for me to endorse converting all your Explorer information spaces to BusinessObjects v4.0 without first understanding the complexity of your Infocubes. The good news is that SAP BusinessObjects v4.1 will overcome these limitations.

Try the Excel method first.  It’s worth a shot.

BusinessObjects v4.1 to the Rescue

Recently, on a What’s New in Explorer 4.1 ASUG call, SAP announced that in BusinessObjects 4.1, unv files will be supported. This is great news. Of course, that raises the question: if unx is the way forward, how will we flatten our SAP hierarchies in the future?

SAP BusinessObjects 4.1 is currently in ramp-up and the latest information on Service Marketplace says that it is targeted to exit ramp-up on November 10, 2013.  As always, this date is subject to change.

One additional piece of advice: if you are curious to learn about future releases and maintenance schedules, I suggest you check out this site on Service Marketplace. Although these are only planned dates, they are useful to have when planning system maintenance and upgrades.

Hope you’ve found this helpful.

«Good BI»

Categories: Lumira

Dealing with Small Numbers in Explorer

January 2nd, 2013 1 comment

Recently I’ve been spending a lot of time with Explorer and I’ve made a few interesting discoveries that I wanted to pass along.

I’m not sure why some of my BI colleagues haven’t been giving Explorer any love (see the DSLayer Explorer Gets No Love podcast), but I think it’s a phenomenal product… but as with any product, there are always little quirks to deal with. Some people call them bugs… I call them quirks. 🙂

Truncation Frustration

I spent a good part of a day experiencing truncation frustration.

I was trying to determine how to display a capacitance value within Explorer. Capacitance, as you may know, can be an extremely large or an extremely small number. It took me multiple attempts to figure out how to get it to work, but I finally cracked it!

What Didn’t Work

Capacitance was stored in the database as a float, which worked great when I used it as a measure — but when I displayed it as a facet value, suddenly Explorer began to get it wrong. Here are both the facet and the measure for capacitance displayed side by side. The facet seemed only able to display up to three decimal places.

See truncated values on the left and the correct measure value on the right.

Here is how I initially had the capacitance defined as a float.

Normally the first thing to do with numbers is to leverage the format settings within the semantic layer, and I tried everything; but if the capacitance was stored as a number, Explorer would consistently drop the significant digits to the right of the decimal place. Here is how I defined the custom format for the capacitance. I tried both #’s and 0’s, but neither worked.

The IDT appears to show the number correctly when I preview it…

… and yet it still appeared incorrectly within Explorer. What was going on? At this point it’s pretty clear that this quirk should be fixed, but I needed to get it working ASAP. I needed a workaround.

I knew I was going to have to convert the capacitance to a string. The added benefit was that the capacitance would also be fully searchable as a string.

I tried multiple formulas to convert the capacitance to a string.

None of these worked successfully:

  • cast (Capacitors.Capacitance as varchar(18)) returns scientific notation – Yuck.
  • Str (Capacitors.Capacitance,10,8)
  • charindex([^0],reverse(Str (Capacitors.Capacitance,10,8)))

What Did Work

The problem now was that regardless of how many decimals of precision the capacitance had, there were always 8 trailing zeros, which I desperately wanted to get rid of. Finally I found the magic formula:

CASE WHEN PATINDEX('%[1-9]%', REVERSE(Str(Capacitors.Capacitance,10,8))) < PATINDEX('%.%', REVERSE(Str(Capacitors.Capacitance,10,8))) THEN LEFT(Str(Capacitors.Capacitance,10,8), LEN(Str(Capacitors.Capacitance,10,8)) - PATINDEX('%[1-9]%', REVERSE(Str(Capacitors.Capacitance,10,8))) + 1) ELSE LEFT(Str(Capacitors.Capacitance,10,8), LEN(Str(Capacitors.Capacitance,10,8)) - PATINDEX('%.%', REVERSE(Str(Capacitors.Capacitance,10,8)))) END
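For readers who want to check the logic without a SQL Server handy, here is the same trailing-zero trim sketched in Python (my own translation, not part of the universe):

```python
def format_capacitance(value: float, width: int = 10, decimals: int = 8) -> str:
    """Mimic T-SQL Str(value, 10, 8), then trim trailing zeros and any
    dangling decimal point, so 0.00047 -> "0.00047" and 3.0 -> "3"."""
    s = f"{value:{width}.{decimals}f}"  # fixed precision, right-aligned
    s = s.rstrip("0").rstrip(".")       # drop trailing zeros, then a bare "."
    return s.strip()                    # drop the left padding for display
```

One difference: the T-SQL version keeps the left space-padding from Str(), which is what makes the string values sort in numeric order; I strip it here only for readability.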

Special Thanks to SwePeso

After beating my head against a wall, the sense of achievement and satisfaction were extremely rewarding and reminded me of why I love my job so much.  I got what I wanted.

It was great. The user can search for 0027 capacitance and will get the appropriate match (more on that next week). You can also observe that all the capacitance values show up in the correct sort order, which means that when I display capacitance in order from smallest to largest, they are sorted correctly.

Capacitance Graph displays properly


As Explorer continues to mature, it’s my hope that more and more of these quirks will be addressed by the product team and more of the product will work as expected — becoming a fully fledged, first-class citizen of the semantic layer.

I’d also like to see hyperlink support within Explorer. I think this is long overdue. Please help me vote YES, thumbs up for this feature >>

«Good BI»