February 17, 2012

Safety Interval Upper Limit and Lower Limit in Generic Data Source


We can see these options only if the generic DataSource is delta-enabled. Delta for a generic DataSource can be defined using three different fields:

1) Time stamp
2) Calendar Day
3) Numeric Pointer

Safety intervals make the system extract delta records that might have been missed in the last extraction (records that were not yet saved at the time of that extraction).

Now we will discuss this with the delta field "Numeric Pointer".

Safety Interval Upper Limit: Let's assume the delta pointer was set to 1000 (numeric pointer) in the last extraction. The source system now has 100 new/changed records, so the new numeric pointer in the source system is 1100. If the safety interval upper limit is set to 10, the selection interval set by the system is 1000 to 1090 (the system subtracts the limit from the new numeric pointer value).

Note: Using the upper limit does not result in duplicate records, so you can load the data directly from the DataSource to the InfoCube.

Safety Interval Lower Limit: In the same situation, if the lower limit is set to 10 instead of the upper limit, the selection interval set by the system is 990 to 1100 (the system subtracts the limit from the last delta numeric pointer).

If the delta field is a time stamp instead of a numeric pointer, the safety intervals are set in seconds (say 300 s or 1800 s).

Note: Using the lower limit results in duplicate records, so you should not load this data directly from the DataSource to the InfoCube. It is mandatory to use a DSO (in overwrite mode) in your data flow and then load the data to the InfoCube.
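
To make the arithmetic concrete, here is a minimal Python sketch of the interval logic described above (illustrative only, not SAP code):

```python
def delta_selection(last_pointer, current_pointer,
                    upper_limit=0, lower_limit=0):
    """Compute the delta selection interval for a numeric-pointer delta.

    The upper limit is subtracted from the current pointer in the
    source system; the lower limit is subtracted from the pointer
    saved by the last delta extraction.
    """
    low = last_pointer - lower_limit
    high = current_pointer - upper_limit
    return low, high

# The examples from the text above:
print(delta_selection(1000, 1100, upper_limit=10))  # -> (1000, 1090)
print(delta_selection(1000, 1100, lower_limit=10))  # -> (990, 1100)

# With a time-stamp delta field the same arithmetic applies,
# with the limits expressed in seconds (e.g. 300 or 1800).
```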

February 10, 2012

Different ways to run an Attribute Change Run (ACR) in SAP BI

Sometimes, due to an issue or a user requirement, we load master data outside the daily loads. After the master data load is done, the first doubt we get is: do I need to run an ACR, or is activating the master data enough?

What is an ACR?: You need to run an ACR when you have aggregates that use your characteristic. The ACR updates the newly added/changed master data in the aggregates and also activates the master data of the characteristic.

What is Activate Master Data?: It activates the newly added or changed master data (which is in the M version), but it does not update the aggregates.

So an ACR is required only if you have aggregates that use these characteristics.
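
The difference can be pictured with a small Python sketch (purely illustrative; the dictionaries are simplified stand-ins for the M/A versions and the aggregates, not real SAP structures):

```python
def activate_master_data(m_version, a_version):
    """Activation only: M-version records become the active (A) version."""
    a_version.update(m_version)
    m_version.clear()

def attribute_change_run(m_version, a_version, aggregates):
    """ACR: activate the master data AND realign dependent aggregates."""
    activate_master_data(m_version, a_version)
    for aggregate in aggregates:
        aggregate.update(a_version)  # stand-in for aggregate realignment

# Example: a changed attribute value sitting in the M version.
m = {"CUST1": {"LOCATION": "NYC"}}
a = {"CUST1": {"LOCATION": "LA"}}
aggs = [{"CUST1": {"LOCATION": "LA"}}]

attribute_change_run(m, a, aggs)
print(a, aggs)  # both now show LOCATION = 'NYC'
```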

In projects, the ACR is generally run as part of a process chain after the master data load completes.

How can we run an ACR for one or more characteristics individually? This can be done in two ways.

First:
1) Go to RSA1.
2) In the menu bar, click Tools --> Apply Hierarchy/Attribute Change.


3) On the next screen, click the "InfoObject List" button. You will get the list of characteristics for which an ACR is required. By default, all the characteristics are selected; deselect the ones for which you don't want to run the ACR.

Note: If your InfoObject does not appear here, an ACR is not required for that characteristic.


4) Now click Execute; you can monitor the job in SM37.

Second:

1) Go to SE38, enter the program RSDDS_AGGREGATES_MAINTAIN, and click Execute.

2) On the next screen, enter the characteristics for which you want to run the ACR and execute. This runs in the foreground.

Note: If you want to run this as a background job, choose Program --> Execute in Background from the menu bar.


Hope it helps...

February 3, 2012

Data Retraction from SAP BW (Through a BEx Query) into the SAP ECC System

Data retraction is the process of extracting data from BW and loading it into the ECC system. It is very useful for planning data; generally, planning data is created using BPC or planning functions. This functionality helps keep the data synchronized between ECC and BW and compare it with real-time data.

The document below explains the step-by-step procedure for doing this.

How to do Repeat Delta in SAP BI


The table ROOSPRMSC stores the time stamp of the last successful delta request for each DataSource (if it is delta-enabled).

Whenever a delta InfoPackage (IP) finishes successfully, the time stamp is updated in this table (regardless of whether the request still exists in the target or not). Once the overall status of the IP turns green, the time stamp is updated in the system table.

So whenever you have a failed delta request, or if you want to load the previous delta request again, follow the steps below.

1) Open the failed request (or the request you want to run again) from the Manage tab of the target, click the request's monitor, and change the overall status to red. (Alternatively, open the delta IP in RSA1 --> click Monitor at the top --> you will see all the requests on the left side.)

Here the system will prompt you with a message saying that you have to perform a repeat delta. Click "Repeat Delta".

2) Now go to the target, set the request to red, and delete it.

Now run your delta IP; it will pick up both the earlier and the current delta records.

Note: If you have already deleted the request from the target, open the request in RSRQ and set it to red. This is enough to get the last delta records again.
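
Conceptually, the repeat-delta mechanism can be modelled with a toy Python class (not SAP code; it only mimics the pointer bookkeeping that ROOSPRMSC provides):

```python
class DeltaQueue:
    """Toy model of delta extraction with repeat-delta support."""

    def __init__(self):
        self.records = []     # all records, in arrival order
        self.confirmed = 0    # pointer saved by the last green delta IP
        self.pending = 0      # pointer of the latest (unconfirmed) delta

    def extract_delta(self, repeat=False):
        # A repeat delta restarts from the last *confirmed* pointer,
        # so it re-sends the previous delta plus anything new.
        start = self.confirmed if repeat else self.pending
        self.pending = len(self.records)
        return self.records[start:]

    def confirm(self):
        # Overall IP status green: the pointer moves forward
        # (the time stamp update in ROOSPRMSC).
        self.confirmed = self.pending

q = DeltaQueue()
q.records += ["r1", "r2"]
print(q.extract_delta())             # ['r1', 'r2']
q.confirm()                          # delta IP green

q.records += ["r3"]
print(q.extract_delta())             # ['r3'] -- later set to red/deleted
q.records += ["r4"]
print(q.extract_delta(repeat=True))  # ['r3', 'r4'] -- earlier + current
```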

Hope it helps...

Use of Aggregates, Compression, Roll Up and Partitioning in SAP BI


Aggregates:

Aggregates are used to improve query performance. Say you have a cube with 30 characteristics, and the queries you run on this cube frequently hit only 10 of them.

To improve query performance, create an aggregate on those characteristics. Instead of searching for data in the cube, the query hits the aggregate first.
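
As a rough illustration in Python (made-up data; an aggregate is essentially a pre-summarized copy of the cube on a subset of its characteristics):

```python
from collections import defaultdict

# Hypothetical cube rows: characteristic values plus a key figure.
cube = [
    {"customer": "C1", "material": "M1", "plant": "P1", "sales": 100},
    {"customer": "C1", "material": "M2", "plant": "P1", "sales": 50},
    {"customer": "C2", "material": "M1", "plant": "P2", "sales": 70},
]

def build_aggregate(rows, characteristics):
    """Pre-summarize the cube on a subset of its characteristics."""
    agg = defaultdict(float)
    for row in rows:
        key = tuple(row[c] for c in characteristics)
        agg[key] += row["sales"]
    return agg

# Aggregate only on the frequently queried characteristic:
aggregate = build_aggregate(cube, ["customer"])
# A query by customer now reads the summarized rows instead of
# scanning every cube row:
print(aggregate[("C1",)])  # -> 150.0
```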

Compression:

As we all know, an InfoCube has two fact tables for transaction data (the F table and the E table). The F table stores the uncompressed, request-wise data and the E table stores the compressed data. Compression is also used to improve query performance and loading performance.

Query Performance:

Compression is nothing but removing the request number and aggregating the key figure values based on the characteristic data. The same sales document can arrive in different requests (let's assume the same sales document came into the cube 5 times in different requests). When we compress, these become one record based on the sales document number, so when we execute a query the system has to pick only one record instead of 5. This improves query performance.
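
In code terms, compression looks roughly like this (a Python sketch with made-up fields, not the actual database operation):

```python
from collections import defaultdict

# Hypothetical F-table rows: request ID + characteristic + key figure.
f_table = [
    {"request": "REQ1", "sales_doc": "4711", "amount": 10},
    {"request": "REQ2", "sales_doc": "4711", "amount": 20},
    {"request": "REQ3", "sales_doc": "4711", "amount": 5},
]

def compress(rows):
    """Drop the request number and aggregate the key figures per
    characteristic combination -- this is the E-table content."""
    e_table = defaultdict(float)
    for row in rows:
        e_table[row["sales_doc"]] += row["amount"]  # request ID removed
    return e_table

print(compress(f_table))  # {'4711': 35.0} -- one record instead of three
```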

Loading Performance:

It is recommended to delete and re-create the indexes when loading data into a cube. These steps delete and re-create the indexes on the F-table data. If you have a huge amount of uncompressed data in the cube (a large F table), the delete-index and create-index steps take a long time to complete.

Roll Up:

This is nothing but updating the aggregates with the latest transaction data loaded into the InfoCube (if the cube has any aggregates).

Partitioning:

Partitioning is also used to improve query performance, and it can be done in two ways:

i) Logical partitioning

ii) Physical partitioning (database-level partitioning)

Refer to the links below for more detailed information about partitioning.
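
As a rough Python sketch of the physical variant (hypothetical boundaries; InfoCubes are typically range-partitioned at database level on 0CALMONTH or 0FISCPER):

```python
from bisect import bisect_right

# Hypothetical range partitioning on 0CALMONTH (YYYYMM), as the
# database would do it for the E table: a query then touches only
# the partitions overlapping its time selection, not the whole table.
boundaries = [201201, 201207]  # partition upper bounds (exclusive)

def partition_for(calmonth):
    """Route a record (or selection) to its range partition."""
    return bisect_right(boundaries, calmonth)

print(partition_for(201105))  # -> 0 (everything before 2012-01)
print(partition_for(201203))  # -> 1
print(partition_for(201212))  # -> 2
```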



Hope it helps...

February 2, 2012

How to de-schedule or reschedule jobs/process chains in SAP in a single shot using programs

When working on support projects, we often face upgrades or system outages. During an outage/upgrade, we need to de-schedule all the loads in the BW system and reschedule them once the system comes back from the outage/maintenance.

The first thing that comes to mind is: how can I stop all the chains running in my system in one shot? If we have a huge number of process chains, doing this manually is a hassle.

The blog below shows how we can do this using programs.

Hope it helps...

Why a Navigational Attribute Works as a Characteristic in Reporting and Modelling in SAP BI


We all know that the fact table is surrounded by dimension tables, and that master data is connected to the dimension tables using SIDs.

So any master data object that has a SID can act as a characteristic in reporting or in a cube.

When you add an attribute to a characteristic, the following tables are generated depending on the attribute type:

Display, time-independent -- P table

Display, time-dependent -- Q table

Navigational, time-independent -- X table

Navigational, time-dependent -- Y table

If you check the P and Q tables in a BI system, they do not contain SID values. But the X and Y tables contain the SID values of the attribute, along with the SID values of the main characteristic (the one for which it is a navigational attribute).

Let's say we have defined LOCATION as a navigational attribute of 0CUSTOMER.

[Figure: relationship between the X table of 0CUSTOMER and the SID table of the navigational attribute LOCATION]

Since navigational attributes have SID values, they act like characteristics in reporting and in the cube.
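
A toy Python model of the table contents (made-up SID numbers) shows why navigation can work purely on SIDs, just like an ordinary characteristic:

```python
# Made-up SID tables for the 0CUSTOMER / LOCATION example above.
sid_customer = {"C1": 11, "C2": 12}   # SID table of 0CUSTOMER
sid_location = {"NYC": 71, "LA": 72}  # SID table of LOCATION

# X table of 0CUSTOMER: customer SID -> SIDs of navigational attributes.
x_table = {11: {"LOCATION": 71}, 12: {"LOCATION": 72}}

# P table of 0CUSTOMER: key values only, no SIDs (display attributes).
p_table = {"C1": {"LOCATION": "NYC"}, "C2": {"LOCATION": "LA"}}

# A query navigating by LOCATION can join purely on SIDs:
cust_sid = sid_customer["C1"]
loc_sid = x_table[cust_sid]["LOCATION"]  # -> 71, i.e. 'NYC'
print(loc_sid == sid_location["NYC"])    # True
```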

You can refer to the document below for the structure of the characteristic tables.

http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/10db929a-7abb-2e10-6fa5-d8d45ca028ed?QuickLink=index&overridelayout=true

Hope it helps..