Archive

Posts Tagged ‘How To’

Migrating BEx-based Information Spaces to BusinessObjects 4.0

August 8th, 2013 3 comments

Sometimes in software it can feel that you take two steps forward and one step back.

One example of this is with BEx-based Explorer Information Spaces when migrating from XI 3.1 to BO 4.0. In 4.0, SAP removed the ability to create information spaces against the legacy universe (unv) and instead required everyone to use the new universe format (unx).

On the surface this lack of support for the legacy universe didn’t seem to really be a problem because BusinessObjects 4.0 provides a painless way to migrate universes from the legacy unv to unx format.

What I didn’t realize until a few months ago was that this WAS a big problem.  A big problem for customers who were creating Information Spaces based on BEx queries and Infocubes.

I’d like to share with you my journey to understand what is really required to move my Bex-based Information Spaces to BusinessObjects v4.0.

Explorer, BEx and Unx

Explorer is NOT an OLAP-aware product, so it doesn’t understand hierarchies.  In XI 3.1, the unv would flatten the hierarchies for you, generating the levels as L00, L01, L02, L03, etc.  There are some advantages to this approach, but there are obvious drawbacks as well.

With BusinessObjects v4.0, SAP rolled out the idea of a transient universe, such that if you wanted to create a WebIntelligence report you didn’t have to create a universe first. WebIntelligence would create a universe on the fly and a hierarchy is treated as a single column with the ability to use +/- to expand and collapse. (You can read more about WebIntelligence and BICS connectivity here.)

If you try to convert an XI 3.1 universe based on a BEx query to a unx, you get the following error:

Now What?

The only 2 options I came up with to overcome this challenge were:

  • Use WebIntelligence (unv) to generate an Excel file and have Explorer index the xls file.
  • Leverage a Multi-Source relational connection to connect to the BEx cube and hierarchy relationally

Approach 1 – Excel Output

The approach I took here was to use the legacy unv file to create a WebI report.  I would then schedule this WebI report and generate an Excel file.  The Excel file would overwrite the ‘source’ Excel file of the Information Space.

To set this up, I created the WebI report with no header and a table that starts at position (0,0).  This table will easily export to Excel.

Sample WebI output prepped for Excel

Next, export the results to Excel 2007 format (xlsx)

Resulting WebI output in Excel

I then uploaded the xlsx file to the BI Launchpad and used it as a source for my Information Space.

 

Once Explorer was able to generate the information space the way I wanted it, I was ready to schedule the WebIntelligence report to Excel output.

Information Space based off an Excel file

Now, look at the properties of the Excel file, because we will need this when scheduling our WebIntelligence report.

Find the FRS location of the Excel File

I can now schedule the WebIntelligence report to run on a recurring basis, since I know the physical location of the Excel file in the FRS.

In my case the directory is: D:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\FileStore\Input\a_053\035\001\74549\   and the file name is:  work order excel-guid[e93a2669-0847-4cd2-b87d-390beb1c956c1].xlsx.

I can use that file name in the WebIntelligence scheduler and write over the source xlsx file for my information space.

When scheduling the file, make sure to select the following:

  • Recurrence – choose the frequency that makes sense for you.
  • Formats – choose Excel.  (The default is always WebI.)
  • Destinations – choose File System.  Do NOT keep an instance in the history.  Use the xlsx directory and filename.

Select the correct parameters in the recurring schedule

As a last step, I would then schedule Explorer to reindex the Information Space after the file was changed and all will be good.

Scheduling Indexing for Explorer

Now the information space should be generated and everything should work just fine.

I must say that at first I really struggled to get this technique to work.  I’m not sure if my WebI output file was too large or if there was some other type of problem.  I spent hours (and hours and hours) trying to troubleshoot why this wasn’t working.  After speaking with technical support and having them attempt the same process, we thought that there was some type of incompatibility between the Excel files generated by WebIntelligence and the Excel format required by the Information Space for indexing.

There were a couple of different errors I encountered.  A few times I forgot to specify Excel as the output type so I got this error:

The only way I was able to fix the error was to restart my Explorer services and reschedule the WebI report and make sure it was exporting in Excel format.

Another error I got was when I didn’t properly exit the information space before attempting to reindex it.  In that case I would get an indexing error and see this when I opened up the information space through “Manage Spaces”.

I found this method of solving the issue to be somewhat flaky.  Sometimes, after WebIntelligence overwrote the Excel file, Explorer was no longer able to index it.  It was very frustrating, and technical support was only able to provide limited help because this is not the recommended solution to the problem.

So what did SAP recommend?  They suggested a much less elegant but more robust and fully supported approach — a multi-source universe.

Approach 2 – Multi-source Solution

This solution is less straightforward, but I was able to get it working and SAP says that this is the only solution that’s officially supported.

There are three things we need to do:

  1. Generate the flattened hierarchy lists and load them into another database (e.g. SQL Server)
  2. Read the SAP Note 1656905 about creating a unx universe from a BW Infocube
  3. Link the two systems via a multi-source connection

In order to generate the flattened hierarchy, you must use the legacy unv universe against your Infocube.  The ability to flatten a hierarchy is not available in a unx universe.  (SAP says that BICS is not there to flatten the hierarchy and there are no plans to enable it because then it’s no longer a hierarchy.  Bummer.)

Next, create a WebIntelligence report based on a legacy unv universe.  Add all levels of the hierarchy to the report and run the report.  Once you have the report, export the results to an Excel file and load them into a relational database.

I created a table called: tblPCHier

Flattened BW Hierarchy in SQL Server
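As a rough sketch, a flattened hierarchy table might look something like this (the key column name, level names and data types below are hypothetical; match them to whatever your WebIntelligence report exports):

-- Hypothetical flattened hierarchy table
CREATE TABLE tblPCHier (
    ProfitCenterKey varchar(20) NOT NULL,  -- key that will later join to the BEx fact table
    L00 varchar(60),                       -- top of the hierarchy
    L01 varchar(60),
    L02 varchar(60),
    L03 varchar(60)                        -- leaf level
);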

Next, I imported the Excel output into my new database table:

BW Hierarchy is loaded in SQL Server
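If you would rather script the load than use the import wizard, something along these lines should also work, assuming the Microsoft ACE OLE DB provider is installed, ad hoc distributed queries are enabled, and the Excel column headers match the table columns (the file path and sheet name are placeholders):

-- Hypothetical scripted load of the WebIntelligence Excel export
INSERT INTO tblPCHier (ProfitCenterKey, L00, L01, L02, L03)
SELECT ProfitCenterKey, L00, L01, L02, L03
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\exports\pc_hierarchy.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');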

Note:  You need to watch out for accidental duplicate names at lower levels of the hierarchy.  Because WebIntelligence will automatically try to aggregate multiple values, child nodes that have the same name but a different parent value will roll up and display an aggregated value within Explorer.  If this is not what you want, make sure the child node names are unique.
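A quick check like the one below will flag leaf names that appear under more than one parent before you index the Information Space (again using the hypothetical column names from the earlier sketch):

-- Leaf-level names that appear under more than one parent node
SELECT L03, COUNT(DISTINCT L02) AS ParentCount
FROM tblPCHier
GROUP BY L03
HAVING COUNT(DISTINCT L02) > 1;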

Next we are ready to go into the IDT (Information Design Tool) and create our multi-source universe.  Follow the instructions listed in the SAP Note 1656905 to understand how to create a unx on top of a BW Infocube.

Once our BW star schema has been added to our data foundation, we can add another connection to our other datasource, the relational database, so we can bring in our hierarchy.

Lastly, join the column from the BEx fact table (SAP) to the key of my hierarchy table (SQL Server).
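Conceptually, the join defined in the data foundation is equivalent to something like this; the fact table and column names below are placeholders, since the real names come from the star schema exposed by the relational BW connection:

-- Illustration only: the cross-database join behind the data foundation
SELECT f.*, h.L00, h.L01, h.L02, h.L03
FROM ZFIAR_C01_Fact f          -- SAP BW side of the multi-source connection (name is hypothetical)
JOIN tblPCHier h               -- SQL Server side
  ON f.ProfitCenter = h.ProfitCenterKey;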

When our multi-source universe is complete we should see a connection to SAP, a connection to a relational database, a data foundation and a universe.

Completed unx components

Here is a preview of my hierarchy table from within the IDT:

View of flattened Hierarchy

So now we just need to verify that everything we need is in the universe.  The big challenge is that not everything from BEx is available in a unx.  Here are some of the things we lose when we go to a relational universe:

  • Calculated Key Figures
  • Restricted Key Figures
  • Variables
  • Conditions
  • Unit / Currency Conversion
  • Hierarchies (which we know about)
  • Custom Structures
  • Time Dependent Objects

I suggest you commit this list to memory.

In one case I had over 50 calculated key figures that needed to be mapped into Explorer, so recreating the logic in SQL was difficult and tedious.

In that case I had measures that included some time dependent calculations:

  • Total AR
  • Current AR, Over 30 days, Over 60 days, Over 90 days, Over 120 days
  • Current Debit, Debit over 30, Debit over 60,  Debit over 90,  Debit over 120
  • Current Credit, Credit over 30, Credit over 60 and Credit over 120

In BEx, I had implemented exit variables to do dynamic time calculations.  Now I need to do the same thing for Explorer.

To accomplish this, I built SQL Server views which dynamically calculated values such as Last Day Previous Month and Last Day Previous Month Previous Year.  I could then use these dynamic calculations in my universe.

Equivalent userexit logic in SQL Server

Although I included these views in the data model, I did not need to join them to anything.

These views were simply used to dynamically generate a date value which was used to restrict the queries to the correct data elements.
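As a minimal sketch, assuming SQL Server, a ‘Last Day Previous Month’ view could be built along these lines (the view and column names are examples only; the original universe references its own view names):

-- Returns the last day of the previous month and the same day one year earlier
CREATE VIEW vLastDayPrevMonth AS
SELECT
    CAST(DATEADD(DAY, -DAY(GETDATE()), GETDATE()) AS date) AS LastDayPrevMonth,
    CAST(DATEADD(YEAR, -1, DATEADD(DAY, -DAY(GETDATE()), GETDATE())) AS date) AS LastDayPrevMonthPrevYear;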

Here is a look at the measures that were created in the universe (click on the image to enlarge):

Measures within the Universe

Here is a screenshot of the WHERE logic for “Total AR, Last Month”:

WHERE Logic for restricted key figure

Here is some of the logic spelled out.

WHERE logic for “Total AR” that leverages curDate()

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= curDate()
OR
( @Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) > curDate()
OR
@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'}
)

WHERE logic for “Total AR, Last Month” that leverages the Last Day Prev Month view

@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Posting date in the document) <= @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
OR
( @Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) <= @Select(SQL\V Last Day Prev Month\Last Day Prev Month)
OR
@Select(ZFIAR_C01\Clearing/Reverse/Posting Dates\Clearing date) < {d '1900-01-01'}
)

If you have to do a lot of this type of time-based calculation logic, you might also want to review some of my previous blogs on the topic.  You don’t necessarily have to create views in the database to do the time calculations:
http://trustedbi.com/2009/12/29/timeseries1/
http://trustedbi.com/2010/01/06/timeseries2/
http://trustedbi.com/2010/01/18/timeseries3/

Caveat

This method of implementation is not for the faint of heart.  It can potentially mean a lot of work.

I would like to highlight some important considerations:

  • If your hierarchies change on a regular basis, you will need to automate the updating of the SQL Server table which contains the hierarchy (a simple sketch follows this list).
  • Any calculated key figures you rely on will need to be recreated within SQL.
  • Any logic you built into variables or user exits may need to be recreated within SQL.
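For the first point, the refresh can be as simple as a truncate-and-reload step scheduled after each hierarchy export.  This is just a sketch, assuming the latest export has already been loaded into a hypothetical staging table:

-- Hypothetical nightly refresh of the flattened hierarchy table
TRUNCATE TABLE tblPCHier;
INSERT INTO tblPCHier (ProfitCenterKey, L00, L01, L02, L03)
SELECT ProfitCenterKey, L00, L01, L02, L03
FROM stg_tblPCHier;   -- staging table loaded from the latest WebIntelligence export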

Given these constraints it’s hard for me to endorse converting all your Explorer information spaces to BusinessObjects v4.0 without first understanding the complexity of your Infocubes. The good news is that SAP BusinessObjects v4.1 will overcome these limitations.

Try the Excel method first.  It’s worth a shot.

BusinessObjects v4.1 to the Rescue

Recently, on a What’s New in Explorer 4.1 ASUG call, SAP announced that in BusinessObjects 4.1, unv files will be supported.  This is great news.  Of course, that begs the question: if unx is the way forward, how will we flatten our SAP hierarchies in the future?

SAP BusinessObjects 4.1 is currently in ramp-up and the latest information on Service Marketplace says that it is targeted to exit ramp-up on November 10, 2013.  As always, this date is subject to change.

One additional piece of advice: if you are curious to learn about future releases and maintenance schedules, I suggest you check out this site on Service Marketplace: https://websmp108.sap-ag.de/bosap-maintenance-schedule  Although these are only planned dates, they are useful to have when planning system maintenance and upgrades.

Hope you’ve found this helpful.

«Good BI»

Categories: Lumira

Dealing with Small Numbers in Explorer

January 2nd, 2013 1 comment

Recently I’ve been spending a lot of time with Explorer and I’ve made a few interesting discoveries that I wanted to pass along.

I’m not sure why some of my BI colleagues haven’t been giving Explorer any love (see the DSLayer Explorer Gets No Love podcast), but I think it’s a phenomenal product… but with any product there are always little quirks to deal with.  Some people call them bugs… I call them quirks.  🙂

Truncation Frustration

I spent a good part of a day experiencing truncation frustration.

I was trying to determine how to display a capacitance value within Explorer.  Capacitance, as you may know, can be an extremely large or an extremely small number.  It took me multiple attempts to figure out how to get it to work, but I finally cracked it!

What Didn’t Work

Capacitance was stored in the database as a float, which worked great when I used it as a measure — but when I displayed it as a facet value, suddenly Explorer began to get it wrong.  Here are both the facet and the measure for capacitance displayed side by side.  It seemed only able to display up to three decimal places.

See truncated values on the left and the correct measure value on the right.

Here is how I initially had the capacitance defined as a float.

Normally the first thing to do with numbers is to leverage the format value within the semantic layer.  I tried everything, but if the capacitance was stored as a number, Explorer would consistently drop the significant values to the right of the decimal place.  Here is how I defined the custom format for the capacitance.  I tried both # and 0’s, but neither worked.

The IDT appears to show the number correctly when I preview it…

… and yet it still appeared incorrectly within Explorer.  What was going on?  At this point it’s pretty clear that this bug quirk should be fixed, but I needed to get it working ASAP.  I needed a workaround.

I knew I was going to have to convert the capacitance to a string.  The added benefit was that capacitance would also be fully searchable as a string.

I tried multiple formulas to try and do the conversion to a string.

None of these worked successfully:

  • cast (Capacitors.Capacitance as varchar(18)) returns scientific notation – Yuck.
  • Str (Capacitors.Capacitance,10,8)
  • charindex([^0],reverse(Str (Capacitors.Capacitance,10,8)))

What Did Work

The problem now was that regardless of how many decimals of precision the capacitance had, the Str() conversion always padded the value out to 8 decimal places with trailing zeros, and I desperately wanted to get rid of those.  Finally I found the magic formula:

CASE
  WHEN PATINDEX('%[1-9]%', REVERSE(Str(Capacitors.Capacitance, 10, 8))) < PATINDEX('%.%', REVERSE(Str(Capacitors.Capacitance, 10, 8)))
  THEN LEFT(Str(Capacitors.Capacitance, 10, 8), LEN(Str(Capacitors.Capacitance, 10, 8)) - PATINDEX('%[1-9]%', REVERSE(Str(Capacitors.Capacitance, 10, 8))) + 1)
  ELSE LEFT(Str(Capacitors.Capacitance, 10, 8), LEN(Str(Capacitors.Capacitance, 10, 8)) - PATINDEX('%.%', REVERSE(Str(Capacitors.Capacitance, 10, 8))))
END
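Here is a quick, hypothetical sanity check of that expression against a sample value (not from the real data) showing the before and after:

-- Str() pads to 8 decimal places; the CASE expression strips the trailing zeros
DECLARE @Capacitance float = 0.00047;
SELECT
    Str(@Capacitance, 10, 8) AS Padded,   -- '0.00047000'
    CASE WHEN PATINDEX('%[1-9]%', REVERSE(Str(@Capacitance, 10, 8))) < PATINDEX('%.%', REVERSE(Str(@Capacitance, 10, 8)))
         THEN LEFT(Str(@Capacitance, 10, 8), LEN(Str(@Capacitance, 10, 8)) - PATINDEX('%[1-9]%', REVERSE(Str(@Capacitance, 10, 8))) + 1)
         ELSE LEFT(Str(@Capacitance, 10, 8), LEN(Str(@Capacitance, 10, 8)) - PATINDEX('%.%', REVERSE(Str(@Capacitance, 10, 8))))
    END AS Trimmed;                       -- '0.00047'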

Special Thanks to SwePeso

After beating my head against a wall, the sense of achievement and satisfaction were extremely rewarding and reminded me of why I love my job so much.  I got what I wanted.

It was great.  The user can search for 0027 capacitance and will get the appropriate match (more on that next week).  You can also observe that the capacitance values show up in the correct sort order, so when I display capacitance from smallest to largest, the values are sorted correctly.

Capacitance Graph displays properly

Conclusion

As Explorer continues to mature, it’s my hope that more and more of these quirks will be addressed by the product team and more of the product will work as expected — becoming a fully fledged, first-class citizen of the semantic layer.

I’d also like to see hyperlink support within Explorer.  I think this is long overdue.  Please help me vote YES, thumbs up for this feature >> http://bit.ly/SzSSo0

«Good BI»

 

BusinessObjects BI Decision Tree

May 21st, 2012 1 comment

UPDATE 10/5/2012:  Things have continued to evolve since May and therefore this post has been updated based on the latest roadmap information coming from SAP together with, as always, some of my own thoughts and opinions.

The good news, bad news about the SAP BusinessObjects product suite is that although there is a lot of best-of-breed functionality there, it can sometimes be a challenge to know what tool to use in every situation.

As a result, I did some research, leveraged some pre-existing content from SDN and came out with this updated BI Decision Tree.

BI Decision Tree

I have updated this decision tree to include the two recent product announcements

  • SAP Visual Intelligence
  • SAP Predictive Analysis

This chart is not meant to be a definitive guide to selecting the right tool because there are always additional factors to consider, but by and large this will get you there most of the time.

Click on Chart to Enlarge

10/5/2012 CLARIFICATION:  If you are doing Business Intelligence on SAP BW, you should always look at using Analysis for Office for OLAP analysis within Excel and Analysis for OLAP for OLAP analysis over the web.  These solutions are the premium alternatives to the legacy BEx Analyzer for Excel and BEx Web, respectively.  Personally, I prefer Analysis for Office for all my BEx analysis just because I prefer the performance and interface of Excel to the one delivered on the web.

Analysis for Application Design (code-named Zen) is still under development and will be the premium alternative to Web Application Designer.  Here is the official SAP SOD for dashboarding.  So glad Miko pushed for this webinar!

WebIntelligence Rocks

Since I discovered Business Intelligence using Crystal Reports and consider it my “first love”, this admission hurts.  I had this blog 90% written when it hit me.  Does anyone even use Analysis for OLAP?  Why put it on the chart? Everyone uses WebIntelligence for connecting to OLAP data.

Today, WebIntelligence provides OLAP connectivity through the semantic layer, and the WebIntelligence user interface is OLAP-aware with a grown-up OLAP look & feel.  It can feel like a native OLAP tool instead of a relational tool that just flattens OLAP data.

10/5/2012: WebIntelligence is not a native OLAP tool so there are limitations.  If you are using SAP BW, only Analysis for Office and Analysis for OLAP are native OLAP tools and will give the full richness of an OLAP experience.  Some capabilities that are not supported by WebIntelligence are:

  • The ability to switch hierarchies without “refreshing” the report
  • Ranking data at a given hierarchy level

The End of Analysis for OLAP?

SAP customers that I’ve worked with are using WebIntelligence to do their formatted reporting and Analysis for Office to keep their finance users happy.  I see some Crystal Reports, but WebIntelligence came a long way in BusinessObjects v4.0 with formatted reporting.

Did you see any Analysis for OLAP sessions at Sapphire this year?  I didn’t think so.  And typically session content is driven by customer interest.

It’s a lonely time for Analysis for OLAP (a.k.a. OLAP Intelligence, OLAP Analysis, Seagate Analysis – boy I’m feeling old).

10/5/2012: I’ve received some feedback from folks saying that Analysis for OLAP is alive and well and has parity functionality with Analysis for Office.  Well, it’s not quite parity when it comes to all the shortcuts and right-clicks, but if you can’t use Analysis for Office, then it does get the job done.  

Your Thoughts

Please let me know your thoughts on this topic.

Do you find it pretty easy to help customers know what tools to use when?

«Good BI»

 

Sizing Up HANA

May 10th, 2012 2 comments

HANA is here, so I wanted to take a minute to field a few common questions about HANA licensing.

The software stack comes with the hardware provided by the hardware partners, whereas the license has to be obtained from SAP.  SAP HANA is sold in 64GB units.  Although the product is non-discountable, it does come in tiered pricing which means the more you buy the cheaper the units are.

One other interesting caveat with the hardware is that you do not necessarily have to license the entire appliance.  If you want to purchase a large box today, you are only required to license at least half of the addressable memory.  In other words, if the box is 1 TB, you must license between 512 GB and 1024 GB (or 8 to 16 units).

In addition to the software license associated with the appliance, SAP HANA also requires SAP Named Users.

For information on available hardware appliances, check out:  http://help.sap.com/hana_appliance/

HANA Editions

HANA is currently available in 3 flavors.

HANA Platform Edition – This is the basic edition.  It contains the software stack needed to use SAP HANA as a database, including the SAP HANA database, SAP HANA Studio for data modeling and administration, the SAP HANA clients, and software infrastructure components.  This is primarily for customers who already have Data Integrator licenses.

HANA BW Edition – This new edition of HANA is for existing SAP BW customers who want to continue doing all their data warehousing within the BW modeling environment.  In other words, you can do whatever you can do today with standalone BW, except that now BW is running on HANA.

Note that this edition does not allow the combination of both SAP and non-SAP data using a transient Info Provider.  That would require the Enterprise Edition.

Example of a Transient Data Provider

HANA Enterprise Edition – This edition extends the HANA Platform Edition with the software licenses needed for customers who want a single solution to import data from SAP and non-SAP sources.  This edition includes SAP LT replication or ETL-based replication.  The ETL-based replication is provided by SAP BusinessObjects Data Services, and the license only allows for the movement of data from external data sources into SAP HANA.  This also includes data distribution rights similar to Open Hub.

**UPDATE 2/1/2013:  There was also a HANA Enterprise EXTENDED Edition, but it has been discontinued.  It extended the HANA Enterprise Edition with the software licenses needed by customers who required replication server to replicate non-SAP data into HANA from DB2.

Let’s drill into the requirements for moving to BW on HANA in more detail.

BW on HANA

AKA:  SAP Netweaver Business Warehouse 7.3, powered by HANA.

This product went GA a month early.  SAP reported a great response from the ramp-up process, and the news that everything went extremely well was welcome.  As a result, I’ve seen many BW customers asking what is required to move to SAP HANA.

The good news is that it’s pretty easy.  The migration process is straightforward: assuming you are already on BW 7.3, it requires about the same amount of work as a migration from Oracle to DB2.

Here are some details on the minimum requirements for running BW on HANA.

  • The BW environment must be running BW 7.3
  • BW must be running Unicode
  • BW must have updated analysis authorizations
  • SAP HANA requires a split ABAP/Java stack.  If you are running the dual stack on a single server, they must be broken apart.  This was recommended in BW 7.0 but now it’s a requirement.  In some cases I’ve seen customers who had them both installed but were only using the ABAP stack.  Make sure you need both.

How Much HANA Do I Need?

If you need to do sizing for a BW environment, it’s pretty straightforward.  SAP Note 1637145 walks you through the process.  Use the link below to access it:
https://websmp230.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=1637145

NOTE:  The HANA sizing script that runs against BW assumes that Unicode is not currently configured.  Unless there are lots of text fields, the impact of already having Unicode enabled will be negligible.

For non-BW environments (including SAP ERP) you should leverage this note:
https://websmp230.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=1514966

Here is a final note on HANA Sizing:
https://websmp230.sap-ag.de/sap%28bD1lbiZjPTAwMQ==%29/bc/bsp/spn/sapnotes/index2.htm?numm=1609322

The sizing of your HANA environment will depend on the type of data compression you can get from the HANA platform.  Typically organizations are seeing an average of 7x compression, which is pretty good.  Depending on the type of data it can be even larger.  Most of the SAP Notes use a very conservative 5x compression.

#1 Thing to Remember When Sizing

The main thing to keep in mind when sizing HANA is that for every 1 GB of storage space, you also need 1 GB of working space.  So if you have 500 GB of data, you will need an additional 500 GB of working space, which means licensing a 1 TB HANA appliance.  When doing HANA sizing, it’s best to determine how much raw data there is and then divide by 3; that will give you the number of GB you will need to license for HANA.
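As a purely hypothetical example: if a sizing exercise shows roughly 1.5 TB of uncompressed source data, the rule of thumb gives 1,536 GB / 3 ≈ 512 GB, so you would license a 512 GB appliance (8 of the 64 GB units).  That lines up with the detailed math: at roughly 7x compression the data would occupy about 220 GB in memory, and doubling that for working space puts you at about 440 GB, which rounds up to the next available appliance size.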

The Value of HANA

It’s critical that you understand the value of HANA for your organization.  Although I have no doubt that HANA can provide value for every organization, it’s important to look at the business problem and calculate a solid ROI.  The top three use cases where we see our customers leveraging HANA are:

  • Analytics & Operational Reporting – Often these are situations where real-time, detailed data is needed from large data sets that currently have unacceptable query times or data latency issues.
  • Accelerate ECC Transactions – In some cases, there are batch processes with a large number of read operations.  These processes, if run through HANA, can run as much as 350x faster than in a traditional database.  If there are any batch processes with a limited batch window, you should consider how HANA might help you reduce the overall processing time.
  • HANA-Based Applications – HANA provides the ability to create applications that could never exist before because the data volumes and the algorithms applied to that data were so complex that they were simply unrealistic to consider.  SAP is now working with customers to develop a number of “killer apps” running on the HANA platform.  I loved Steve Lucas’ analogy: think of HANA as the gaming system (e.g. Xbox, PS3) and HANA-based applications as the game that makes you buy the platform (e.g. Halo, Uncharted).

In my next post I’ll be writing about why you should leverage SAP to help deliver HANA Value workshops.

«Good BI»

Categories: HANA

SAP Runs Crystal Reports

December 15th, 2011 8 comments

Say it loud, Say it proud

With the latest development of SAP Business Suite Enhancement Pak 5, SAP users can now enjoy the rich, flexible formatting functionality of Crystal Reports right from within the application.

ALVs Get An Upgrade

For many years, users have been limited in their ability to provide formatted reporting from an SAP ALV (ABAP List Viewer).  Users could do basic sorting, grouping and filtering, but that was about it.  Now the output from an ALV can be pushed directly into a Crystal Report using the new Crystal Reports ALV Adapter.

This solution is currently available for ALV Grid (SAP GUI ALV) and for Web Dynpro ABAP ALV.  ALV List and ALV Classic are not supported.  In my case, I will be running an SAP GUI ALV.

Using the Crystal Reports ALV Adapter

Before installing the adapter you need to be running Microsoft .NET Framework 2.0 or above.  In addition, because I will be running ALVs directly within the SAP GUI, I must also have SAP GUI 7.10 or above installed.

Downloading the Adapter

One of the hardest things about installing the adapter is finding it in the Service Marketplace.  Here is a screenshot of the location.

Personally, I recommend searching for it.  Here is how I found it:

From the search screen, search for CR ADD-ON

Software Search Screen

Search for CR ADD-ON

You should see CR ADD-ON FOR BS APPS 1.0 come back in the search results.  This is what you want.

Search Results

Once you select CR ADD-ON FOR BS APPS 1.0, you will be able to download the ZIP archive file.

Service Marketplace Software Download

After extracting the archive you will see xSAPCRVAdpt.exe, which is what we will be installing.

Installing the Adapter

Begin the installation by double-clicking on the install file, xSAPCRVAdpt.exe.

Here we will see the Crystal Reports ALV Adapter ready to be installed.

After choosing Next, the installation will begin:

After a few minutes you should see the following message, which indicates the installation was successful.

Configuring the Adapter

The final step after installing the adapter is telling Business Suite to allow Crystal Reports to be used with ALVs.

We need to use tcode, SALV_GUI_CUST

Here we want to make sure we Allow Crystal Reports to be an option.

Next we need to use tcode, SALV_WD_CUST to go into the Web Dynpro Settings

Here we want to make sure we Allow Crystal Reports to be an option.

Testing the Adapter

You can use any ALV Grid (SAP GUI ALV) or Web Dynpro ABAP ALV for testing.  In my case, I’m going to use the t-code KSB1, Display Actual Cost Line Items for Cost Centers.

Note that at the end of the input screen, I select the layout for this report.  This is important because with Enhancement Pak 5, you can save Layouts that leverage Crystal Reports.

Here are the default results in an ALV Grid:

Pretty boring, eh?

Choose the Change Layout Icon

Under the View tab you can change the Preferred View to Crystal Reports.  This will cause the data to be sent to a Crystal Report using the SAP_GenericTemplate.rpt.

Now you can see the output in a Crystal Report.

Modifying the Crystal Report

We first need to get a copy of the report, so we will choose the option “Export Report”.

Export to Crystal Report

By selecting this option, I will be presented with a dialog box which will allow me to Save the Report to my hard drive.  At first, I found this button confusing because I expected it to work like it does in standard Crystal Reports and ask me if I wanted to export the report in PDF.  It left me wondering how I would go about exporting the results to PDF if I wanted to.  Hmm.

NOTE:  Once the report is exported, you can make changes to it using Crystal Reports 2011, since at the time of this writing Crystal Reports for Enterprise does not support direct data connectivity.  Also, this template was created in the pre-Crystal Reports 9 format; therefore you could modify these reports with older versions of Crystal Reports.

When you open the report to modify it, you will see that the data is being pushed into the report via an ADO.NET (XML) database connector.  This means that this report cannot be refreshed from within Crystal Reports during the report modification process.

Rather than showing you the details about how to modify a Crystal Report, I will simply assume that after exporting the report, you have been able to make a number of changes to suit your needs and are ready to load those changes into the SAP List Viewer.

Select your updated report from the dialog box:

After being imported successfully:

… the new report layout will appear on the screen

Saving the Layout for Future Use

Now that we’ve got the new report loaded, we would like to be able to choose this view or layout again later.  SAP accommodates this.  All you need to do is save the layout under a new name by following the prompts in the Save Layout dialog:

In my case I named my new report layout /ZCRKSB1

After the layout has been saved, users can reference it from the Settings section of the original SAP List Viewer prompt screen.  Under the last section, Settings, the user can change the default layout by selecting the layout of their choice.

Summary

For many years now, we’ve been talking to customers and partners about the value of embedded analytics.  Now we are finally beginning to see it rolled out in earnest.  Not only is Crystal Reports now embedded directly into SAP Business Suite, but SAP Dashboards (aka Xcelsius) are being provided out of the box for HR, Finance and other key areas.

The only drawback of embedded analytics today is the lack of built-in intelligence about how to navigate the data.  In the SAP List Viewer today, if you click on a column, the List Viewer is intelligent enough to drill to the associated supporting document; the Crystal Reports output does not offer that kind of navigation.  But even with this limitation, there is still real value in better reporting from SAP Business Suite.

«Good BI»