
Some tips for fine-tuning BODS jobs for faster, more efficient execution with optimal resource utilization.


Hello Guys,

 

Often we skip or ignore small things that can make our jobs run faster. For that reason, I have consolidated some key points that can make BODS jobs more efficient, with optimal consumption of resources. This discussion should be most helpful to beginners in this area.

 

 

1. Increase the monitor sample rate, e.g. to 50,000 rows in a production environment.
2. Exclude the Data Integrator job logs from virus scanning.
3. When executing a job for the first time, or re-running it after changes, select the option COLLECT STATISTICS FOR OPTIMIZATION (this is not selected by default).
4. From the second execution onwards, use the collected statistics (this is selected by default).
5. If you set the Degree of parallelism (DOP) option for your data flow to a value greater than one, the thread count per transform will increase. For example, a DOP of 5 allows five concurrent threads for a Query transform. To run objects within data flows in parallel, use the following Data Integrator features:
• Table partitioning
• File multithreading
• Degree of parallelism for data flows
6. Use the Run as a separate process option to split a data flow, or use the Data Transfer transform to create two sub data flows that execute sequentially. Since each sub data flow is executed by a different Data Integrator al_engine process, the number of threads needed for each will be 50% lower.
7. If you are using the Degree of parallelism option in your data flow, reduce the number for this option in the data flow Properties window.
8. Design your data flow to run memory-consuming operations in separate sub data flows that each use a smaller amount of memory, and distribute the sub data flows over  different Job Servers to access memory on multiple machines.
9. Design your data flow to push down memory-consuming operations to the database.
10. Push-down memory-intensive operations to the database server so that less memory is used on the Job Server computer.
11. Use the power of the database server to execute SELECT operations (such as joins, GROUP BY, and common functions such as decode and string functions). The database is often optimized for these operations.
12. You can also do a full push down from the source to the target, which means Data Integrator sends SQL INSERT INTO... SELECT statements to the target database.
13. Minimize the amount of data sent over the network. Fewer rows can be retrieved when the SQL statements include filters or aggregations.
14. Use the following Data Integrator features to improve throughput:
   a) Using caches for faster access to data
   b) Bulk loading to the target.
15. Always view the SQL that Data Integrator generates and adjust your design to maximize the SQL that is pushed down, to improve performance.
16. Data Integrator does a full push-down operation to the source and target databases when the following conditions are met:
• All of the operations between the source table and target table can be pushed down.
• The source and target tables are from the same datastore, or they are in datastores that have a database link defined between them.
A full push-down operation is when all Data Integrator transform operations can be pushed down to the databases and the data streams directly from the source database to the target database. Data Integrator sends SQL INSERT INTO... SELECT statements to the target database, where the SELECT retrieves data from the source (see the example after this list).

17. Auto correct loading ensures that the same row is not duplicated in a target table, which is useful for data recovery operations. However, an auto correct load prevents a full push-down operation from the source to the target when the source and target are in different data stores.

18. For large loads where auto-correct is required, you can put a Data Transfer transform before the target to enable a full push down from the source to the target. Data Integrator generates an SQL MERGE INTO target statement that implements the Ignore columns with value and Ignore columns with null options if they are selected on the target.

19. The lookup and lookup_ext functions have cache options. Caching lookup sources improves performance because Data Integrator avoids the expensive task of issuing a database query or a full file scan for each row (see the lookup_ext sketch after this list).

20. You can control the maximum number of parallel Data Integrator engine processes using the Job Server options (Tools > Options > Job Server > Environment). Note that if you have more than eight CPUs on your Job Server computer, you can increase Maximum number of engine processes to improve performance.
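To make item 16 concrete, this is roughly the kind of statement you would see in the Display Optimized SQL view when a full push down happens; the table and column names here are invented for illustration only:

INSERT INTO TGT_SALES (ORDER_ID, ORDER_DATE, TOTAL_AMOUNT)
SELECT ORDER_ID, ORDER_DATE, SUM(AMOUNT)
FROM SRC_SALES
WHERE ORDER_DATE >= '2014-01-01'
GROUP BY ORDER_ID, ORDER_DATE

For item 19, a hedged sketch of a cached lookup_ext call as it might appear in a Query output column mapping; the datastore, table and column names are assumptions, so adjust them to your own repository:

lookup_ext([DS_Target.DBO.DIM_CUSTOMER, 'PRE_LOAD_CACHE', 'MAX'],
           [CUSTOMER_NAME],
           ['UNKNOWN'],
           [CUSTOMER_ID, '=', Query.CUSTOMER_ID])

PRE_LOAD_CACHE loads the whole lookup table into memory once, instead of querying it row by row.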


Getting started with Data Services & Hadoop



I wanted to learn how the SAP EIM platform, specifically Data Services, integrates with Hadoop. Being a bit of a techie at heart, and not really one for reading manuals, I thought I'd get hands-on experience and decided to install my own virtual machine with both Hadoop and Data Services running on it. I know this sort of defeats the purpose of Hadoop, with its distributed file system and processing capabilities, but it was the easiest way for me to learn.

 

I'm no Linux expert, but with my basic knowledge and some help from Google (other search engines are available) I decided to install the Intel distribution on a Linux virtual machine. Intel uses the Intel Manager framework for managing and distributing Hadoop clusters, and it is relatively straightforward to get up and running. Once installed, this provides a nice, easy-to-use web interface for installing the Hadoop components such as HDFS, Oozie, MapReduce, Hive etc. These can of course all be installed manually, but that takes time, and using Intel Manager allowed me to provision my first Hadoop cluster (single node) relatively quickly.

 

A detailed explanation of the different Hadoop components can be found on the Apache Hadoop site - http://hadoop.apache.org/


Once Hadoop was up and running, the next step was to install Data Services. I decided to go with Data Services 4.2, which of course requires the BI platform, so I went with 4.0 SP7 as Data Services 4.2 doesn't yet support 4.1 of the BI platform. I went with the default installation and used the Sybase SQL Anywhere database, which is now bundled with the BI platform install, as the repository for both the CMS and Data Services.

 

As per the technical manual, Data Services can connect to Apache Hadoop frameworks including HDFS and Hive sources and targets. Data Services must be installed on Linux in order to work with Hadoop. Relevant components of Hadoop include:

 

HDFS: Hadoop distributed file system. Stores data on nodes, providing very high aggregate bandwidth across the cluster.


Hive: A data warehouse infrastructure that allows SQL-like ad-hoc querying of data (in any format) stored in Hadoop.


Pig: A high-level data-flow language and execution framework for parallel computation that is built on top of Hadoop. Data Services uses Pig scripts to read from and write to HDFS including joins and push-down operations.


Data Services does not use ODBC to connect to Hive; it has its own adapter, so you must first configure this in the Management Console. The technical manual has all the details, and it is fairly straightforward to configure. Make sure you have all the relevant jar files listed in the classpath.

 

The next step in my learning with Hadoop and Data Services is to create a demo scenario. Rather than invent one, I'm going to use an existing demo that can be downloaded from the SAP Developer Network and adapt it to work with Hadoop rather than a standard file system and database. I'm going to use the Text Data Processing Blueprints 4.2 Data Quality Management demo, which takes 200 unstructured text files, passes them through the Data Services Entity Extraction transform and loads the results into a target database.

 

I'm going to put these files into HDFS and read them out using the Data Services HDFS file format, pass the data through the standard demo data flow and then load the data into Hive tables.

 

The demo comes with a BusinessObjects Universe and some Web Intelligence reports so time permitting I may port these over to read from Hive as well.

 

I'll hopefully create my second blog once I've completed this, with my findings and a recorded demo.

Executing a job by another job in BODS 4.1 using simple script


Step1:

In the Data Services Management Console, go to the Batch Job Configuration tab and click Export Execution Command.

This will create a .bat file with the job name (Job_TBS.bat) in the following path:

D:\Program Files (x86)\SAP BusinessObjects\Data Services\Common\log\


 

Step2:

Use the script below to check whether the .bat file exists in that path (see also the sketch after the script).

exec('cmd','dir "D:\\Program Files (x86)\SAP BusinessObjects\Data Services\Common\log\"*>\\D:\\file.txt');
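If you prefer an explicit check, a hypothetical alternative to the directory listing is the file_exists() function, which returns 1 when the file is present, so the job can stop early with a clear message:

# Hypothetical pre-check before triggering the job (same path as above).
if (file_exists('D:\\Program Files (x86)\\SAP BusinessObjects\\Data Services\\Common\\log\\Job_TBS.bat') = 0)
begin
    raise_exception('Job_TBS.bat was not found in the Data Services log folder.');
end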

 

 

Step3:

Create a new job (J_Scheduling) to trigger the job which needs to be executed (Job_TBS).

 


Use the script below to trigger the job; a variation that captures the result is sketched after it.

exec('cmd','D:\\"Program Files (x86)"\"SAP BusinessObjects"\"Data Services"\Common\log\Job_TBS.bat');
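A hedged variation of the same call: exec() also accepts a third flag argument, and with flag 8 it waits for the command and returns its return code together with the command's output, which can then be written to the trace log. $G_ExecResult is assumed to be a job-level global variable, and the exact format of the returned string should be checked against the exec() documentation for your version:

# Trigger Job_TBS.bat and capture the result for the trace log.
$G_ExecResult = exec('cmd', 'D:\\"Program Files (x86)"\\"SAP BusinessObjects"\\"Data Services"\\Common\\log\\Job_TBS.bat', 8);
print('Job_TBS finished with: ' || $G_ExecResult);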

 

Now the job J_Scheduling will trigger the job Job_TBS using a simple script.

Enabling "Information Now"


Remember when organisations were content with quarterly reporting? It enabled them to plan for the next three months or even the rest of the year. Seeing what happened within their business over the last three months was enough!

    We are in the "Information Now" era. Give ME the information as it is happening: at the point of sale, in the supply chain, on my mobile. Move over, stale and out-of-date data; it's all about "how fresh is your data?"

    Fresh green data isn’t always easy to deliver into the right hands at the right time. System diversity, complexity, mobility, infrastructure constraints, data volume explosions are all there lurking to trip up and stall the data along its journey.

    Businesses are living with an ever-shrinking data batch window and it can be like trying to put the same weekly shop into smaller and smaller bags week on week.

    SAP’s HANA platform is about helping to relieve customers of this paradigm of being shackled by data latency issues. The equation of hyper performance serving up piping hot data will lead businesses to innovation.

    But when customers live with a heterogeneous system landscape how can they get high quality disparate data available faster and at less cost?

    One way is to move less data: only take what you need, lifting only the things that have changed. This will greatly help the customer achieve this goal. Change data capture is a key strategy for reducing data latency.

    I am going to examine how one of the tools in the HANA platform, Data Services, can be used to apply the most appropriate mechanism for delivering fresh, high-quality data through various change data capture techniques; it's a question I get asked a lot. There is a misconception that Data Services is a batch-driven tool, but it can and does operate in this era of "Information Now".

    These posts will be based on my experiences with a variety of customer engagements over the years using Data Services. I am looking to promote a discussion, provide ideas, and perhaps save you time (there may even be something worth cutting and pasting into an RFI). The next post will describe the two main options for delivering data: push or pull.

To Push or Pull? That is the question.


One of the first things to take into account when figuring out the best approach to change data capture is identifying how the data is going to be delivered to the Data Services engine. Do you have to go and fetch the data, or is it dropped into Data Services' lap? From a technology point of view, when data is pushed it usually travels via a message bus, API or web service; when pulling data, Data Services connects to the source systems and pulls it out.

                                                                              

PUSH METHOD

Typically (but not always), if a customer is using the push method, Data Services will be running real-time jobs that listen for messages arriving on a queue. In this scenario customers usually send already-identified changes (inserts/updates/deletes) along the message bus in XML packages. (One example of this could be real-time SAP IDocs over ALE.)

Most of the hard work is already done for the Data Services developer here: the changes are pre-identified and delivered, and all that needs doing is deciding how to apply the delta load to the target, and perhaps adding a few data quality steps and transformations along the way. This method is how a real-time data requirement can be fulfilled using Data Services, and it is something more customers are looking to do to achieve near real-time data integration.

 

      PULL METHOD

This is the more traditional method of acquiring data and one that every ETL developer is familiar with: Data Services initiates the extraction process from the source system. This is typically a scheduled job set to run at an interval. These intervals can vary from an overnight extraction to a batch process that runs at regular intervals, such as the micro-batch in the latest version of Data Services. The Data Services micro-batch keeps polling the source data until certain conditions are met; this delivers a lower level of time-based granularity and simplifies the process of configuring this approach. Micro-batching can be really useful when customers are looking to trickle-feed data to a target but may not be able to switch on CDC functionality in the source system: micro-batching is platform independent, whereas CDC in Data Services is specific to the type of database being sourced.

When using a pull method, the logic to identify changes is more often than not built using standard functions within Data Services such as table comparison or the CDC features; other options, however, include polling directories for change files or working with Sybase Replication Server.

So to recap, customers would consider using the push method with Data Services when:

    • They require data to be moved and processed in real time. An example of this would be real-time address confirmation and cleansing from an e-commerce web portal through a web service request to the Data Services engine.
    • They have a message queue or SOA architecture mechanism for data delivery.
    • They need real-time interactions with the data.
    • Data is typically being processed from a single primary source (reaching out to other systems could introduce unwanted latency into the real-time process).

Pull methods are used where:

    • Data latency is less of an issue and the data is processed at different intervals, possibly via a schedule.
    • The extracts are driven from Data Services through a batch process.
    • High-volume bulk data is involved.
    • Data is integrated from many sources, providing complex cross-system integration.
    • Change data identification needs to happen within Data Services.

That said, there is no technical limitation on the number of sources and targets that can be used within a single batch or real-time dataflow, and this is more of a general rule of thumb. Something to keep in mind is that Data Services' primary focus is data integration, not process integration between different applications within a business architecture; that is where a customer would adopt SAP Process Orchestration (PO).


In the next post I will start delving into Source versus Target based change data capture.

SAP BusinessObjects Data Services 4.2 upgrade steps


Purpose:

The purpose of this document is to upgrade SAP BusinessObjects Data Services from 11.7/3.x/4.x to SAP BusinessObjects Data Services 4.2.

Overview:

Environment Details:

Operating system: Windows Server 2008 64-bit

Database: Microsoft SQL Server 2008 R2

Web application: Tomcat

SAP BusinessObjects tools: SAP BusinessObjects Information Platform Services 4.1 SP2; SAP BusinessObjects Data Services 4.1 SP2

Migration tools: Data Services Repository Manager

Repository version: BODI 11.7, BODS 3.x or BODS 4.x

Installation & Configuration steps to upgrade SAP BODS 3.X/4.X to SAP BODS 4.2:

 

Pre-installation checklist:

 

  • Back up the repositories, configuration files & data cleanse files:
    • Back up the local, profile & central repositories
    • Back up the following configuration files:
      • admin.xml
      • sapconnections.xml
      • as.xml
      • dsconfig.txt
    • Back up the data cleanse files
  • Create a checklist of the number of batch jobs/real-time jobs, the number of access servers and job servers, and the configuration paths (datastores) of the source & target systems
  • Create a checklist of the job schedules in the SAP BusinessObjects Data Services Management Console
  • Create a checklist of the groups & users available in the SAP BusinessObjects Data Services Management Console

 

Post-installation checklist:

  • Install the EIM (Enterprise Information Management) or IPS (Information Platform Services) package in the new landscape
  • Install SAP BODS 4.2
  • Install & configure the Address Cleansing Package 4.2
  • Best practice is to take a cloned copy of the database used by SAP BODS 4.1 and use the clone in the SAP BODS 4.2 environment
  • Upgrade the local/central repositories using the Data Services Repository Manager
  • Configure the repositories in the EIM or IPS Central Management Console
  • Configure the SAP RFC in the Data Services Management Console 4.2
  • Configure the adapters in the Data Services Management Console 4.2
  • Apply rights & security according to your requirements in the EIM or IPS Central Management Console
  • Now log in to SAP BODS Designer 4.2 and configure the jobs (real-time/batch jobs)
  • Configure the datastores (SAP R/3, SAP BW, RDBMS), substitution parameters, job server, access server and file formats (flat files, CSV)
  • Now validate the jobs at the top level and fix any errors
  • Now execute the jobs using the scheduler or other BI tools

 

SAP Business Objects Information Platform Services 4.1 SP2 Installation & Configuration Documents:

Prerequisite:

SAP BusinessObjects Business Intelligence (BI) 4.1 SP2 or its higher compatible patches*

OR

SAP BusinessObjects Information Platform Services (IPS) 4.1 SP2 or its higher compatible patches*

Upgrade & configuration steps:

When the prerequisite system check completes successfully, click Next.

On the "SAP BusinessObjects Information platform services 4.1 SP2 setup" page, click Next.

Accept the License Agreement, and click Next.

Type a password for the CMS Administrator account, and click Next.

To start the installation, click Next.

The installation runs to completion; click Finish.

 

Upgrade & configuration of Data Services:

When the prerequisite system check completes successfully, click Next.

On the "SAP BusinessObjects Data Services 4.2 SP1 setup" page, click Next.

Accept the License Agreement, and click Next.

Click Next to accept the default path to the folder where the program will be installed.

Click Next to accept the default language (English).

Type a password for the CMS Administrator account, and click Next.

Click Yes to restart the BOIPS services.

To start the installation, click Next.

The installation runs to completion; click Finish.

SAP BusinessObjects Data Services 4.0 configuration:

Before the local repository is upgraded, logging in to SAP BusinessObjects Data Services Designer fails with an error.

So, before logging in, upgrade the repository using the SAP BusinessObjects Data Services Repository Manager. Below are the steps for upgrading the local repository and the secure central repository.

Upgrade of the local repository (using the Repository Manager).

Upgrade of the secure central repository (using the Repository Manager).

After installation & upgrade of the SAP BusinessObjects BOE/IPS server and SAP BusinessObjects Data Services 4.2 SP1, this is the status of the environment I upgraded in VMware.

Log in to the Central Management Console and go to Data Services.

Then click on Data Services and update the repository details of the SAP BusinessObjects Data Services repository.

Now try to log in to SAP BusinessObjects Data Services Designer: click Log On, then click OK.

Now log in to the SAP BusinessObjects Data Services Management Console.

Click on the "Administrator" tab.

New features added in the SAP BusinessObjects Data Services Management Console 4.2:

 

Object Promotion:

  • Import configuration
  • Export configuration

Export configuration can be done in two ways:

  • FTP
  • Shared directory

 

 

Substitution parameters: we can now change the substitution parameter settings through the SAP BusinessObjects Data Services Management Console as well.

 

New adapters have been added, such as the Hive adapter & VCF adapter.

 

Changes in "Query and Validation" transform in SAP Business Objects Data Services Designer 4.2

 

Changes in "Architecture" of SAP Business Objects Data Services please refer below upgrade guide for more details.

 

Untitled.png

 

New Features added like "REST web services": Representational State Transfer (REST or RESTful) web service is a design pattern for the World Wide Web. Data Services now allows you to call the REST server and then browse through and use the data the server returns

 

Relevant Systems

 

These enhancements were successfully implemented in the following systems:

 

  • SAP Business Objects Information Platform Services 4.1 SP2/SAP Business Objects Enterprise Information Management 4.1 SP2
  • SAP Business Objects Data Services 4.1 SP2

 

This document is relevant for:

 

  • SAP Business Objects Data Services Administrator

 

This blog does not cover:

  • SAP Business Objects Data Quality Management Up-Gradation

 

Reference Material:

 

Upgrade Guide:

 

https://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_upgrade_en.pdf

 

Installation Guide Windows:

 

https://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_install_win_en.pdf

 

Installation Guide UNIX:

 

https://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_install_unix_en.pdf

 

SAP Data Services 4.2:

 

https://help.sap.com/bods

 

SAP Notes:

 

1530081 - Troubleshooting tips for SAP Data Services 4.x

https://service.sap.com/sap/support/notes/1530081

 

1723342 - Data Services 4.1 Upgrade Scenarios

https://service.sap.com/sap/support/notes/1723342

 

1725915 - Release Notes for SAP Data Services 4.1 (14.1.0) Ramp-Up

https://service.sap.com/sap/support/notes/1725915

 

1740516 - SAP Data Services 4.x and SAP Information Steward 4.x compatibility with SAP Business Intelligence platform and Information Version

https://service.sap.com/sap/support/notes/1740516

 

1900051 - SAP Data Services 4.x and SAP Information Steward 4.x compatibility with SAP BusinessObjects Business Intelligence platform and Information platform services for inactive releases.

https://service.sap.com/sap/support/notes/1900051

Connectivity Issue in BODS designer


Hi All,

 

I am facing an issue connecting the BODS Designer on my local system to the client system, that is, BODS installed on a remote server.

 

The local system and the client system are on the same LAN. I am using the same credentials that are used on the server side, such as the system ID and port number of the server machine.

 

Using these credentials, we are facing a connectivity issue, as follows.

 


 

However, with these same credentials, the local machine user is able to connect to BODS at the server location using the same IP address mentioned in the screenshot.

On the same IP address, the CMC and Information Steward work fine via their web URLs from remote machines; only the Designer has a problem connecting to the server-side system.

 

Kindly help us to resolve this issue with your valuable comments.

Missing internal datastore in Designer


I will show how to make the internal datastores visible. Note 1618486 does not give the method to make the internal datastores visible; the method is as follows:

 

You need to add the string DisplayDIInternalJobs=TRUE to DSConfig.txt under the [string] section, as in the screenshot and the sketch below:

screen1.png
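In plain text, the relevant part of DSConfig.txt would then look roughly like this (a sketch only; the other entries in the section will differ per installation):

[string]
...existing entries...
DisplayDIInternalJobs=TRUE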

 

How to find the real DSConfig.txt file?

 

First, check whether a folder named "Documents and Settings" or "ProgramData" exists on the C: drive.

 

  • For “Documents and Settings” the path maybe: “C:\ Documents and Settings\All
    Users\Application Data\SAP BusinessObjects\Data Services\conf”
  • For “ProgramData” the path maybe: ”C:\ ProgramData\Application Data\SAP
    BusinessObjects\Data Services” or “C:\ ProgramData\SAP BusinessObjects\Data
    Services\conf”
  • If there have both check the “ProgramData” first.
  • If there have none, please make sure that your hidden files and folders
    is set to ”Show hidden files, folders, and drives”, (which set in the Folder
    Options --> View), if the configuration is ok, then you can go to the
    directory where your DS installed, for example D disk, then go to the install
    directory maybe: “D:\SAP BusinessObjects\Data Services\conf”; Moreover, if your
    DS version is DS4.0, in some cases, you may have to go to the “D:\SAP
    BusinessObjects\Data Services\bin” to find the DSConfig.txt file.

 

After you change DSConfig.txt, restart your DS Designer (restarting the DS services is also advisable). You will then find the internal datastore "CD_DS_d0cafae2".

 

Finally, what note 1618486 describes is that you need to make the property information of the internal datastore "CD_DS_d0cafae2" match your own DB logon information, as the default user name and other defaults may not be correct when your DB logon information differs. The table owner also needs to be changed: the default table owner set in DS is "DBO", and this may cause errors if the table owner in your own DB is not "DBO". Note 1618486 explains how to change it.

 



Initial Data Migration Over, The Fun Has Just Begun


Where I work the SAP Utilities solution is the heart of our SAP system.  Therefore when it was decided to merge two more utility companies into our existing company, I have to thank our ASUG SAP point of contact Ina Felsheim  who over 2 years ago helped arrange a data migration ASUG webcast for Utilities.

 

The great Ginger Gatling  provided the webcast.  Ginger has since moved on to cloud integration, but I am grateful to both Ina and Ginger for this webcast.  First I am not an expert in the SAP Utilities solution - it covers the gamut of device management, customer service, billing, FI-CA, and so much more.

 

SAP provides a rapid deployment solution for utilities using Data Services, and included in this migration solution are templates to use.   You don't necessarily have to use the Data Services tool to even use the templates provided.  The spreadsheet templates helped provide a framework of the data mapping needed/required to fit into the SAP structures.

 

It also helped to have an SAP Experts subscription, as I listened to a recorded session given by SAP's Paul Medaille - this session helped me find and locate the needed templates.

 

For Rapid Deployment Solutions, SAP recommends you obtain consulting to help with the solution.  We decided not to do this and do it ourselves.  We installed Data Services in-house, and then used the RDS templates to migrate the SAP Utilities objects such as PARTNER, ACCOUNT, DEVICE, etc.

 

In my mind, here are the basic steps of the migration:

 

Step 1: Download the SAP Mapping Templates


 

This helps provide the framework of the legacy data you have and what it should map to in SAP.  This helped me a great deal as I was not an expert in this SAP area.

 

Step 2: Get the Data Services RDS Solution for Utilities Up and Running

SAP provides RDS installation instructions.


 

Since I didn't have a background in Data Services, I also spent time in a 2 day SAP data migration class - I wrote about it here: Do you still need a live SAP Education Course?

 

I was very impressed by a co-worker who didn't attend any training and immediately understood how to use Data Services.

 

Step 3: Start Populating the Spreadsheets from Legacy to Feed to Data Services


This was the best part of the project, because you populate the spreadsheets by object and this serves as a data source for Data Services. Data Services then generates a file that can be read by SAP's migration tool EMIGALL.

 

Step 4: Use EMIGALL to Load SAP

Another co-worker spent time getting familiar with EMIGALL and it is a very handy tool to load data into SAP.  But to me, the Data Services solution did the heavy lifting for us non-SAP Utilities experts, to get the required, right data in a needed format.

 

I remember another ASUG customer presentation (Kellogg's) on data migration - the goal for data migration should be 0 defects.  Getting close...

 

If you want more details, links, information, please reply below - and I will try to find time to write more about this.

Source Versus Target Based Change Data Capture


My original plan for this post was to wrap up source- and target-based change data capture in a single instalment; unfortunately, I seem to have got carried away and will post a follow-up on target-based CDC in the future.

 

Once the method of data delivery has been established (push or pull), the next area of consideration is how change data capture (CDC) can be applied within Data Services. More often than not, when a project demands that data be extracted many times from the same source system using Data Services, the area of change data capture will be discussed. As the name suggests, CDC is all about identifying what has changed in the source system since the data was previously extracted and then pulling only the new or modified records in the next extraction. The net result is that effective CDC enables users to build efficient data processing routines within Data Services, reducing batch windows and the overall processing power required.

 

Source Based CDC

With source-based change data capture, any record changes are identified at the source and only those records are processed by the Data Services engine. These changes can be pre-identified using techniques such as SAP IDocs or Sybase Replication Server (now integrated in the latest version of Data Services), or identified dynamically by using pushdown logic and timestamps.

 

There are various methods available with Data Services, and I have used a number of them in many different scenarios. With Data Services there are nearly always two or more ways to achieve the required result, and the choice comes down to the ETL developer's level of creativity and caffeine on the day. The following is just the rule of thumb that I use; it doesn't always work in all scenarios, as there are usually many variables to take into consideration, but these are the stages I go through when trying to identify the best method for change data capture.

 

So the customer has said that they want to do CDC. The first questions I always ask are:

  • What is the source database/application?
  • How much data is in the source system tables we are extracting from?
  • How often do we need to extract, and what is the batch window?

 

  • If the source table has low data volumes and the batch window is large, then I will usually go for the easiest path, especially in a proof of concept: reading all of the data every time and applying an auto correct load in Data Services to carry out the updates and inserts.

 

  • If the source data is changing often and is of high volume, but there is a reasonable overnight batch window, I would typically ask whether the source table I am extracting from has a valid timestamp. A valid, trustworthy timestamp is key; some systems only set the timestamp on insert, for example. If one is available, I would consider timestamp pushdown in Data Services. Timestamp pushdown requires a number of job steps to be configured (see the sketch after this list):

 

      • First, a global variable to hold the "last run date time" needs to be defined for the job.
      • A database table needs to be created with a datetime field and a JOBID field.
      • The last-run-date-time variable is populated by a script at the start of the job workflow, which reads the value from the lookup table.
      • Within the Query transform, set the WHERE clause to (source datetime field > last run date variable).
      • Check that the WHERE clause is being pushed down by viewing the SQL or ABAP.
      • The last step, back at the workflow level, is to use either a script or a dataflow (I prefer a dataflow) to update the lookup table with the new datetime value.
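A minimal sketch of the scripting side of this pattern. The global variable $G_LastRunDate, the job name variable $G_JobName, the control table ETL_JOB_CONTROL (columns JOBID and LAST_RUN_DATE) and the datastore DS_Control are all hypothetical names, and the embedded SQL may need adjusting for your database:

# Start-of-job script: read the previous run's high-water mark into the
# job-level global variable $G_LastRunDate.
$G_LastRunDate = sql('DS_Control', 'SELECT LAST_RUN_DATE FROM ETL_JOB_CONTROL WHERE JOBID = {$G_JobName}');

# Inside the dataflow, the Query transform WHERE clause would then be, for example:
#     SRC.LAST_MODIFIED > $G_LastRunDate
# (use Display Optimized SQL to confirm the predicate is pushed down).

# End-of-job script (or a small dataflow): record the new high-water mark.
# In practice, capture this timestamp at the start of the extract so that rows
# changed while the job was running are not missed.
sql('DS_Control', 'UPDATE ETL_JOB_CONTROL SET LAST_RUN_DATE = {sysdate()} WHERE JOBID = {$G_JobName}');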


In the latest version of Data Services (4.2), the timestamp steps above can be configured within the Workbench as part of the replication wizard. If the source system is SAP, I would also look at using the CDC functions available within the content extractors, as this is preconfigured functionality and doesn't require any of the above job configuration steps.


If data needs to be extracted at various points throughout the day, then the pushdown method could still be an option; however, I am always very cautious about impacting performance on the source systems, and if there is a chance that performance degradation will affect business transactions then I would opt for a different approach where possible.

 

  • If the source system is changing regularly, has high data volumes, the data needs to be transferred intraday and the extract should have little or no impact, I would look at either using IDocs for SAP or using the database-native CDC mechanisms supported by Data Services. Configuration of these methods is fully documented in the Data Services manuals, but typically they require the customer to have certain database functions enabled, which is not always possible. Also, depending on the database type, a slightly different mechanism is used to identify changes; this has in the past limited when I have been able to take advantage of this approach. In the latest version of Data Services (4.2), configuring database CDC is made easier, as this can be done through the wizard within the Workbench, or users can simply define a CDC method based on their datastore configuration. If the option is greyed out, that datastore type does not support native application CDC.

 

  • If the source data changes frequently, needs to be processed nearly instantaneously and must have little or no impact on the source systems, I would consider using a log-interrogation-based approach or a message queue which has changes pre-identified within the messages (e.g. IDoc/ALE). For non-invasive log-based CDC, Sybase Replication Server working with Data Services enables data to be identified as a change using the database's native logs, flagged with its status (insert/update) and shipped to Data Services for processing. If this real-time, non-invasive approach to data movement is key to a project, then I would recommend complementing Data Services with Sybase Replication Server.

 

When working on site with customer data, the source systems and infrastructure will nearly always determine which methods of change data capture can be put to best advantage with Data Services. Given free rein, however, the best and most efficient approach is without doubt to carry out the change identification process as close to the source system as possible, although that isn't always an available option. In the next blog post I will dig a little deeper into target-based change data capture methods.

BODS alternatives and considerations


1. SQL transform

When the underlying table is altered in the database (columns added or deleted), the SQL transform's schema should be refreshed with "Update schema".

If not, the transform may stop pulling any records from the table, and it neither errors nor warns when we validate the job from the Designer.

 

2. Date format

The to_date() function: the 'YYYYDDD' format is not supported by BODS.

The documentation does not provide any information on converting 7-digit Julian dates (legacy COBOL dates).

We may need to write a custom function to convert these dates, or get the date from the underlying database if the database supports this date format.

------ Sample function ----------

# Dates starting with 2000 or 18 are handed to the source database for conversion.
if (substr($julian_dt, 1, 4) = 2000 or substr($julian_dt, 1, 2) = 18)
begin
    return(sql('DS_Legacy', 'select to_char(to_date('||$julian_dt||',\'YYYYDDD\'),\'YYYY.MM.DD\') from dual'));
end

# Otherwise treat YYYYDDD as a JDE-style Julian date: years 19xx convert directly,
# while years 20xx need 1200 months (100 years) added.
return decode((substr($julian_dt, 1, 2) = 19), jde_date(substr($julian_dt, 3, 5)),
              (substr($julian_dt, 1, 2) = 20), add_months(jde_date(substr($julian_dt, 3, 5)), 1200),
              NULL);

------------- Sample function END ---------------

 

 

3. Getting a timestamp column from the SQL transform

In the SQL transform, a timestamp field cannot be pulled directly; as a workaround we can convert it to text (or a custom format), pull it, and convert it back to the desired datetime format (see the sketch below).
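A hedged sketch of that workaround, assuming an Oracle source and hypothetical table and column names:

-- SQL transform SQL text: ship the timestamp as text
SELECT ORDER_ID,
       TO_CHAR(LAST_UPDATED, 'YYYY.MM.DD HH24:MI:SS') AS LAST_UPDATED_TXT
FROM ORDERS

A downstream Query transform can then map the column back with to_date(LAST_UPDATED_TXT, 'yyyy.mm.dd hh24:mi:ss').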

 

4. When a character field is mapped to a numeric field, if the value has no numeric equivalent then it is converted to NULL.

If the value is numeric-equivalent, it is typecast to numeric.

Alternative: add nvl() after you map it to the numeric field if you don't want to populate NULL for that field in the target (see the sketch below).
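For example, an output column mapping along these lines (the column name and the separator/scale settings are assumptions) avoids loading NULL when the source text does not convert:

nvl(to_decimal(SRC.AMOUNT_TXT, '.', ',', 2), 0.00)

Here to_decimal() converts the text using '.' as the decimal separator and ',' as the thousands separator with a scale of 2, and nvl() substitutes 0.00 when the conversion yields NULL.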

 

5. For character fields, when a longer field is mapped to a shorter field the value is truncated and propagated.

If the source value is all blanks, NULL is propagated to the next transform (behaviour similar to the point above).

 

6. When using the gen_row_num() function:

If this function is used in a Query transform that also performs a join, it is possible to generate duplicate values.

The issue is not with the function itself but with the Query transform's behaviour when a join is combined with gen_row_num().

Reason: for every transform, BODI generates a SQL query and pushes it to the database to fetch/compute the result data.

When joining, BODI caches one table, then fetches the other table, performs the join and returns the data.

The row numbers are generated while caching; that is where the issue lies.

Example: when a 100-record table (cached) is joined with a 200-record table (assuming all 200 match the join criteria), the output of the join is 200 records. Since the row numbers were already generated against the 100-record cached table, there will be 100 duplicate values in the output.

  

7. BODS version 14 allows multiple users to operate simultaneously on a single local repository.

This can lead to code inconsistency: if the same object (datastore/job/workflow/dataflow) is modified by two different users at the same time, the last saved version is what ends up in the repository.

Solution: it is mandatory to use the central repository concept to check code out and in safely.

 

 

8. "Enable recovery" is one of the best feature of BODS, when we use Try-Catch approach in the job automatic recovery option will not recover in case of job failure.

- Must be careful to choose try-catch blocks, when used this BODS expects developer to handle exceptions.

 

9. Bulk loader option for target tables:

Data is written directly to the database data files (the SQL layer is skipped). When the constraints are re-enabled, even the PK may be invalid because of duplicate values in the column, since the data is not validated while loading.

This error is shown at the end of the job, and the job completes "successfully" with an error saying "UNIQUE constraint is in unusable state".

9a. While enabling/rebuilding a UNIQUE index on the table, if any Oracle error prevents enabling the index, the BODS log still reports that duplicate values were found and the UNIQUE index cannot be enabled, even when the real issue is not the data but, for example, the Oracle temp segment.

When the API bulk load option is used, the data load is faster, and all the constraints on the table are automatically disabled and re-enabled after the data has been loaded.

 

10. LOOKUP, TARGET and SOURCE objects from a datastore have hard-coded schema names.

Updating the schema name at the datastore level is not sufficient to point these objects to the updated schema.

Instead, we need to use an "Alias" in the datastore.

 

11. Static repository instance:

Multiple users can log in to the same local repository and work simultaneously. When any user updates a repository object, those changes are not immediately visible to the other logged-in users; to see them, the other users have to log in to the repository again.

 

12. BODI-1270032

This error appears when we try to execute the job; the job will not start and does not even reach the "Execution Properties" window.

It simply says it cannot execute the job.

If you validate the job, it validates successfully without any errors or issues.

This may be caused by the following issues:

1. An invalid Merge transform (go to the Merge transform, validate it, and take care of the warnings)

2. Invalid Validation transform columns (check each validation rule)

Best alternative:

Go to Tools > Options > Designer > General and un-check the option "Perform complete validation before job execution", then start the job.

Now the job fails with a proper error message in the error log.

 

13. How to use global variables in the SQL transform:

You can use global variables in the SQL transform's SQL statement.

You will not be able to import the schema while the SQL statement references a global variable, so when importing the schema use constant values instead of the global variable. Once the schema is imported, you can replace the constants with the global variable; it will be replaced with the value you set for that variable when the job is executed.

The other thing: I don't think you will be able to retain the value of a global variable outside the dataflow. To verify this, add a script after the first dataflow and print the value of the variable - is it the same as the value set inside the dataflow?

If the data type of the column is VARCHAR, enclose the variable in { }, for example: WHERE BATCH_ID = {$CurrentBatchID}. If it is NUMERIC, use WHERE BATCH_ID = $CurrentBatchID.
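Putting it together, a minimal sketch of a SQL transform statement using a global variable; the table and column names are hypothetical:

-- Import the schema first with a literal such as BATCH_ID = 100,
-- then swap the literal for the global variable:
SELECT ORDER_ID, STATUS, LOAD_TS
FROM STG_ORDERS
WHERE BATCH_ID = {$CurrentBatchID}

Drop the curly braces and use BATCH_ID = $CurrentBatchID if BATCH_ID is a numeric column.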

Data Services user and rights management - step by step instructions


Data Services uses the Central Management Server (CMS) for user and rights management. In a stand-alone DS environment, the same functionality is supplied by the Information Platform Services (IPS). Setting up user security is a rather cumbersome process. The procedure for granting access to a DS developer consists of four steps:

 

  • Create the user
  • Grant access to the DS Designer application
  • Grant access to one or more (or all) repositories
  • Allow automatic retrieving of the DS repository password from the CMS

 

1. Creating the user


By default, the DS installation program does not create any user accounts. Use the “Users and Groups” management area of the CMC to create users.


 

Figure 1: User List

 


Right click on the “User List” entry, select New > “New User” and specify the required details.



Figure 2: Create New User

 

Select the “Create & Close” button to finalize this step.

 

2.  Granting access to DS Designer

 

User name and password are entered in the DS Designer Repository Logon window.


 

Figure 3: DS Repository logon

 

2.1. User management

 

Unfortunately, the newly created user only has a limited number of access rights by default. More specifically, authorization to run DS
Designer is not granted automatically.

When trying to start the application with this user and password, access is denied:


 

Figure 4: Access Denied

 

Access can be granted to an individual user in the Applications area of the CMC. Right-click “Data Services Application” and select “User Security”.


 

Figure 5: Applications area in CMC

 

Select the “Add Principals” button:


 

Figure 6: User security

 

Select the user from the “User List” in the “Available users/groups” panel and select the “>” button to move it to the “Selected users/groups” panel.


 

Figure 7: Add Principals

 

Select the Advanced tab and then the “Add/Remove Rights” link.


 

Figure 8: Assign Security

 

Grant access to Designer and select OK.


 

Figure 9: Add/remove Rights

 

2.2. Group management


As mentioned above, the DS installation program does not create any default user accounts. But it does create several default group accounts. One of these groups is called “Data Services Designer”. Members of this group automatically have access to the DS Designer.


After creating a new user, assign it to this group account. That will grant the user access to DS Designer - the same result as the explicit user-level grant, but achieved in a much simpler way.


Return to the “Users and Groups” management area of the CMC. Right-click on the user and select “Join Group”.


 

Figure 10: Users and Groups

 

Select the group from the “Group List” in the “Available groups” panel and select the “>” button to move it to the “Destination Group(s)” panel and hit OK.


 

Figure 11: Join Group

 

3.  Granting access to the repositories


When an authorized user connects to the DS Designer application, the following error message is displayed:


 

Figure 12: No repositories are associated with the user

 

That is because a user in the “Data Services Designer Users” group has no default access to any of the DS repositories:


 

Figure 13: Access control list: No access by default

 

If a user needs access to a given repository, that access has to be explicitly granted to him.

 

Navigate to the “Data Services” area in the CMC. Right-click on the name of the repository and select “User Security”.


 

Figure 14: Data Services

 

The "User Security" dialog box appears and displays the access control list for the repository. The access control list specifies the users and groups that are granted or denied rights to the repository.


 

Figure 15: User Security

 

Select the “Add Principals” button. Then select the users or groups from the “User List” or “Group List” respectively in the “Available users/groups” panel and select the “>” button to move it to the “Selected users/groups” panel. Finally, select “Add and Assign Security”.


 

Figure 16: Add principals

 

Select the access level to be granted to the user or group:

 

  • To grant read-only access to the repository, select “View”.
  • To grant full read and write access to the repository, select “Full Control”.
  • To deny all access to the repository, select “No Access”.

 

Select the “>” button to move it from the “Available Access Levels” to the “Assigned Access Levels” panel. And hit OK.


 

Figure 17: Assign security

 

Note: By applying the same method at the level of the Repositories folder in the “Data Services” area in the CMC, the user will be granted the same access level to all repositories at once. Both mechanisms can be combined to give the developers full control over their own repository and read access to anybody else’s:

 

  • Grant View access to every individual developer (or to the “Data Services Designer Users” group or to a special dedicated group, for that matter) at the level of the Repositories folder. Make sure that, when using the default group for this, it comes with the default settings. If it doesn’t, simply reset security settings (on object repositories and on all children and descendants of object repositories) on the default group before attempting this operation.
  • Grant “Full Control” access to every individual developer for his own repository.

 

When logging in to DS, developers see the full list of repositories they are granted access to. A value of “No” in the second column means full access, “Yes” means read-only.

 


Figure 18: Typical DS Designer logon screen

 

 

Don’t make the list too long. The logon screen is not resizable. And scrolling down may become very tedious!

 

4.  Retrieving the DS repository password from the CMS

 

The users can now connect to the repositories from within DS Designer. When a user starts the application, as an extra security feature, he is prompted for the (database) password of the repository:


 

Figure 19: Repository password

 

If this extra check is not wanted, it can be explicitly removed.

 

Return to the "User Security" dialog box that displays the access control list for the repository. Select the User, then the “Assign Security” button.

 

In the “Assign Security” dialog box, select the Advanced tab and then the “Add/Remove Rights” link.


 

Figure 20: Assign Security

 

Grant both “Allow user to retrieve password” and “Allow user to retrieve password that user owns”  privileges and hit OK.


 

Figure 21: Add/remove Rights

 

DS Designer will not prompt for a database password anymore when the user tries to connect to this repository.

 

Note: By applying the same method at the level of the Repositories folder in the “Data Services” area in the CMC, this extra check will be removed from all repositories accessible by this user at once.

A KISS approach to naming standards in Data Services


A strict naming schema for all DS objects (projects, jobs, workflows, dataflows, datastores, file formats, custom functions) is essential when working in a multi-user environment. The central repository has no folder concept, hierarchy or grouping functionality. The only way to distinguish objects of one grouping from another is by name. The most effective approach for naming objects is based on prefixing.

 

Note: In order to display the full names of DS objects in the Designer workspace, increase the icon name length. Do this by selecting Tools --> Options from the Designer menu, then expanding Designer and selecting General. In this window, specify a value of 64 in the "Number of characters in workspace icon name" box.

 


 

General note: Versioning should not be handled by naming conventions. So, never include a version number in an object name. Use the central repository concept for maintaining successive versions of any object.

 

 

1.  Reusable objects

 

 

Object: Project
Naming convention: <project_name>
Example: BI4B

Object: Job
Naming convention: <project_name>_<job_name>
Example: BI4B_D

Object: Workflow contained in one job only
Naming convention: <project_name>_<job_name>_[XT|TF|LD|AG…], <project_name>_<job_name>_[XT|TF|LD|AG…]_<workflow_name>, or <project_name>_<job_name>_<workflow_name>
Examples: BI4B_D_XT, BI4B_D_LD_Facts, BI4B_D_Init

Object: Workflow that is reused
Naming convention: <project_name>_COMMN_[XT|TF|LD|AG…]_<workflow_name> or COMMN_[XT|TF|LD|AG…]_<workflow_name>
Examples: BI4B_COMMN_Init, COMMN_ErrorHandling

Object: Dataflow contained in one job only
Naming convention: <project_name>_<job_name>_[XT|TF|LD|AG…]_<dataflow_name>
Example: BI4B_D_LD_Opportunities

Object: Dataflow that is reused
Naming convention: <project_name>_COMMN_[XT|TF|LD|AG…]_<dataflow_name>
Examples: BI4B_COMMN_LD_JobCycles, COMMN_LD_Jobs

Object: Embedded dataflow
Naming convention: <project_name>_<job_name>_[XT|TF|LD|AG…]_<dataflow_name>_EMB
Example: BI4B_D_LD_Employees_EMB

Object: ABAP dataflow
Naming convention: <project_name>_<job_name>_XT_<dataflow_name>_ABAP
Example: BI4B_D_XT_KNA1_ABAP

Object: Custom function contained in one job only
Naming convention: <project_name>_<function_name>
Example: BI4B_getDate

Object: Custom function that is reused
Naming convention: COMMN_<function_name>
Example: COMMN_dateKey

 

 

 

1.1. Projects: <project_name>


Give every DS project a 5-character short name. The name has to be short, because it will be used as a prefix for the name of all reusable objects defined within the project.

 

E.g.: P2345

 

 

 

1.2. Jobs: <project_name>_<job_name>


Give every job a 5-character short name. Use < project name>_ as a prefix for the job name. The name has to be short, because it will be used as a prefix for the name of all workflows and dataflows defined within that job.

 

E.g.: P2345_J2345

 

 

 

1.3. Workflows: <project_name>_<job_name>_[XT|TF|LD|AG…][_<workflow_name>]


Name every workflow with <project_name>_<job name>_ as a prefix. Use COMMN_ as prefix for shared workflows, used across projects, <project_name>_COMMN_ when used in multiple jobs within a given project.

 

Workflows are often used to group dataflows for serial or parallel execution. In a typical ETL job, dataflows are executed in "stages": a first set of dataflows has to be executed (in parallel) before the next set can be started, and so on. A data warehouse loading job may extract data from the sources, load it into staging, optionally transform it from staging-in to staging-out, and then load it into the core EDW and aggregate it into the semantic layer.

 

Distinguish between job stages by extending the prefix with a 2 character code:

 

  • XT: extract from source to staging
  • TF: transform from staging-in to staging-out
  • LD: load from staging into the core EDW layer
  • AG: load (physically aggregate) from core to semantic layer

 

The workflow name will be used as a prefix for the name of all embedded workflows and dataflows.

 

E.g.: P2345_J2345_XT

 

Within a workflow, objects (scripts, sub-workflows, dataflows) must either all be defined in parallel or all sequentially, and will be executed as such. There is no limit to the number of objects within a workflow. When the number of objects is higher than the number of processors available, DS will internally control the execution order of the embedded parallel objects. Only when there are fewer objects than processors available will they really be executed in parallel.

 

Complex hierarchical structures can be defined by nesting workflows. There is no limit to the number of nesting levels, either. With nested workflows, use a name (_Facts for facts extraction or load, _Dims for dimension processing…) combined with an outline numbering scheme (1, 11, 111, 112, 12, 2…).

 

E.g.: P2345_J2345_LD_Dims21

 

Some workflows may not contain dataflows at all; they only contain not reusable objects. In that case, just name the workflow according to its function.

 

E.g. for a workflow embedding an initialization script: P2345_J2345_Initialise


 

1.4. Dataflows

 

DS supports three types of dataflows. The dataflow names must be unique across the different types. To distinguish the embedded and ABAP dataflows from the regular ones, use a suffix in their name.

 

  • Regular dataflows: <project_name>_<job_name>_[XT|TF|LD|AG…]_<dataflow_name>

 

According to design and development best practices there should only be a single target table in a dataflow. Name a dataflow according to that target table.

 

Use <project_name>_<job_name>_ as a prefix. Use COMMN_ as the prefix for shared dataflows used across projects, and <project_name>_COMMN_ when a dataflow is used in multiple jobs within a given project. Distinguish between dataflow locations (extract, transform, load, aggregate…) by extending the prefix with a 3-character code (XT_, TF_, LD_, AG_...), as in the embedding workflow.

 

E.g.: P2345_J2345_XT_S_TABLE1, P2345_J2345_LD_TargetTable

 

  • Embedded dataflows: <project_name>_<job_name>_[XT|TF|LD|AG…]_<dataflow_name>_EMB

 

Name every embedded dataflow with <project_name>_<job name>_ as a prefix; use _EMB as a suffix for the dataflow name. Distinguish between dataflow locations (extract, transform, load and aggregate) by extending the prefix with a 3 character code (XT_, TF_, LD_ and AG_).

 

E.g.: P2345_J2345_LD_TargetTable_EMB

  

  • ABAP dataflows: <project_name>_<job_name>_XT_<dataflow_name>_ABAP

 

An ABAP dataflow is always used as a source in a regular dataflow. Reuse that name for the  ABAP dataflow and add _ABAP as a suffix to make it unique.

 

E.g.: P2345_J2345_XT_S_TABLE1_ABAP

 

 

 

1.5. Custom Functions: <project_name>_<function_name>

 

Give every Custom Function a descriptive name. Use <project_name>_<job name>_ as a prefix. Use COMMN_ as prefix for shared custom functions, used across projects.

 

E.g.: P2345_TrimBlanksExt

 

 

 

2.  Datastores: [SAP|BWS|BWT|HANA…]_<datastore_name>


As datastores are often used in multiple projects, they do not follow the same naming conventions as the other reusable objects.

 

Name a datastore in line with its physical name, and make the prefix depend on the object’s type:

 

 

Datastore Type       Database Type   Naming Convention   Example
Database             SQL Server      SQL_                SQL_OC4A1
Database             Oracle          ORA_                ORA_ITSD
Database             Teradata        TD_                 TD_Staging
Database             DB2             DB2_                DB2_MDM
Database             MySQL           MySQL_              MySQL_GEB
Database             Sybase ASE      ASE_                ASE_CRN
Database             Sybase IQ       IQ_                 IQ_CMDWH
SAP Applications     -               SAP_                SAP_MDR
SAP BW as a source   -               BWS_                BWS_Achme
BW as a target       -               BWT_                BWT_Hana
SAP Hana             -               HANA_               HANA_DB
Adapter              -               AS_                 AS_Nexus
Web Services         -               WS_                 WS_Weather

 

Note 1: Pay attention when choosing datastore names. A datastore name cannot be changed once the object has been created.

 

Note 2: Landscape-related information should not be handled with datastore names. So, never include a physical system indicator (DEV, T, QA…) in a datastore name. Landscape information should be configured using datastore configurations. Create one datastore, then create a datastore configuration for every tier (development, test, QA, production…) in the landscape.

 

 

 

3.  File formats: <project_name>_<file_format_name>

 

 

Reuse the file name for the format name of a project-specific file. Use <project_name>_ as a prefix.

 

E.g.: P2345_ISO_Cntr_Codes

 

Note: Pay attention when choosing a file format name. A file format name cannot be changed once the object has been created.

 

 

 

4.  Not reusable objects

 

Because not reusable objects are only defined within the context of a workflow or a dataflow, no strict naming standards are necessary. Names will only serve documentation purposes.

 

Use meaningful names for workflow objects (Script, Condition, While Loop, Try, Catch).

 

Do not change the transform names unless you want to stress the specific purpose of a Query transform, e.g. Join, Order, OuterJoin…

SAP Data Services 4.x Certification Tips


Hi All,

 

We all know that SAP Education recently released a new certification for SAP Data Services 4.x. Ina Felsheim published a blog about it a while back; to see the full details, click here: http://scn.sap.com/docs/DOC-44139

 

I had many doubts about the 4.x certification: which modules are included (only Data Integration, or both Data Integration and Data Quality), whether Text Data Processing is included, whether SAP extraction is included, and so on. Even after reading the above blog and going through other certification-related threads on SCN and other portals, my doubts were not fully clarified. Last week I cleared my certification on SAP Data Services 4.x, and that cleared up all of my doubts.

 

I thought I would share my experience, the topics covered, and some tips on SCN, and I hope this blog will help those planning to take the certification in the future. It should clear up the common doubts about the 4.x certification. Also refer to the blog above, which helps a lot.

 

1. Certification Title: SAP Certified Application Associate - Data Integration with SAP Data Services 4.x


As the title says, Data Integration with SAP Data Services 4.x, the certification includes only Data Integration topics, i.e., only the Data Integrator transforms are covered, not the Data Quality transforms. So there is no need to prepare Data Management-related topics for this certification.


2. SAP Data Services Version: 4.x (4.0 / 4.1 / 4.2) - it is preferable to go through the 4.1 / 4.2 user manuals for preparation.


3. Topics Included: As mentioned in the above blog, BODS10 is the base training material, and all the topics in BODS10 are covered in this certification.


4. Additional Topics Included (apart from BODS 10): Text Data Processing (TDP), Performance Optimization and SAP Extraction


5. Apart from the topics covered in BODS10 and the additional topics listed in point 4 above, roughly 30% of the certification consists of scenario-based questions that test hands-on experience. So it is preferable to have at least some working experience with SAP Data Services 4.x.


6. Cut-off Marks: Total questions - 80, cut-off - 64% (i.e., you have to answer at least 52 questions correctly), total time - 180 min., question pattern - objective type (single or multiple answers; for multiple answers, the question clearly states how many are correct).


7. Reference Material: In my experience, check BODS10 for the topics, and for preparation go through the 4.1 or 4.2 user manuals (you can download them from the SAP Help Portal); concentrate more on the guides below:


  • Designer Guide
  • Reference Guide
  • Performance Optimization Guide
  • Supplement Guide for SAP

 

8. CMC & DS Workbench Topics: In my experience, no questions are asked about the BOE CMC (also called BIP or IPS) or the Data Services Workbench.

 

9. Other Tips: This certification mostly tests knowledge and awareness of the SAP Data Services tool, but as I said, more than 30% of the questions are scenario based, so working experience is a must.

 

I hope the above points/tips help you prepare for the SAP Data Services 4.x certification.

 

All the Best!

 

Thanks,

Ramakrishna Kamurthy

How to Start/Stop Data services Server on LINUX


This is relevant if you would like to restart the Data Services Job Server on a Linux box.

 

 

Step 1: Log in via PuTTY.

 

Input: provide the hostname.

 

Hit Open

DS_JObServer_Start_Stop_Putty_01.jpg

 

Step 2: Go to the installation directory of Data Services on the Linux box, generally referred to as LINK_DIR in all SAP documents.

If you logged in with a named user, check where you are with the pwd command on the Linux prompt once you have successfully logged in to the server.

$ cd $LINK_DIR/bin/
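
If you are not sure whether $LINK_DIR is set in your shell, here is a quick check (my addition; the variable is normally set by al_env.sh or in the profile of the Data Services OS user):

$ echo $LINK_DIR     # should print the dataservices installation root
$ ls $LINK_DIR/bin   # contains al_env.sh, svrcfg and the other utilities used below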

 

DS_JObServer_Start_Stop_Putty_02.jpg

Step 3: Now you are in the bin directory. Execute the following commands there.

$ . ./al_env.sh

$ ./svrcfg

 

$ . ./al_env.sh

$ ./svrcfg

Unable to open lock file </usr/sap/FDQ/businessobjects/dataservices/lock/svrcfg.lock>

Exiting ...

If you do not have the authorizations to execute the above commands, you will see the above message.

 

Then switch to a user which has the authorizations to perform the above tasks.
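
If you are not sure which account that is, the owner of the installation directory is usually a good hint (a quick check of my own, not part of the original steps):

$ ls -ld $LINK_DIR   # the owning OS user shown here is typically the one to switch to with su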

 

$ su - sapdsfdq

Password:

-bash-4.1$ pwd

/usr/sap/FDQ

Again do the same: navigate to the LINK_DIR directory where Data Services is installed. Once you are in the bin directory, execute the above script and command.

 

-bash-4.1$ . ./al_env.sh
-bash-4.1$ ./svrcfg

** Data Services Server Manager Utility **

       1 : Control Job Service
       2 : Configure Job Server
       3 : Configure Runtime Resources
       4 : Configure Access Server
       5 : Configure SNMP Agent
       6 : Configure SMTP
       7 : Configure SSL
       8 : Configure Native Component Supportability
       x : Exit

 

You will see the Data Services Server Manager command-line utility. It allows you to perform a lot of activities, but at the moment we are only interested in stopping and starting the Data Services Job Server.

 

Step 4: Provide the options at the command line.

 

-bash-4.1$ . ./al_env.sh
-bash-4.1$ ./svrcfg

** Data Services Server Manager Utility **

       1 : Control Job Service
       2 : Configure Job Server
       3 : Configure Runtime Resources
       4 : Configure Access Server
       5 : Configure SNMP Agent
       6 : Configure SMTP
       7 : Configure SSL
       8 : Configure Native Component Supportability
       x : Exit


Enter Option: 1


-----------------------------------------------------------------

                **  Control Job Service **

-----------------------------------------------------------------

Job Service Executable Path                           Status
----------------------------------------------      ------------

/usr/sap/FDQ/businessobjects/dataservices/bin/AL_JobService      Running

-----------------------------------------------------------------

s: Start Job Service       o: Stop Job Service       q: Quit


Enter Option:

We have provided option "1" because we want to control the Job Service so that we can stop/start it.

 

After providing the input "1", we can see that the Job Service is currently running.

 

Since we want to stop the Job Service, we provide the further option "o".

 

-----------------------------------------------------------------

                **  Control Job Service **

-----------------------------------------------------------------

Job Service Executable Path                           Status
----------------------------------------------      ------------

/usr/sap/FDQ/businessobjects/dataservices/bin/AL_JobService      Running

-----------------------------------------------------------------

s: Start Job Service       o: Stop Job Service       q: Quit


Enter Option: o
Waiting for Job Service to terminate. This will take several seconds.
Please Wait!!!
01-25-14 10:37:01 (15279:1057101600) JSERVICE: Shutting down AL_JobService ...
01-25-14 10:37:05 (15279:1057101600) JSERVICE: AL_JobService has been Stopped.
-----------------------------------------------------------------

                **  Control Job Service **

-----------------------------------------------------------------

Job Service Executable Path                           Status
----------------------------------------------      ------------

/usr/sap/FDQ/businessobjects/dataservices/bin/AL_JobService      Not Running

-----------------------------------------------------------------

s: Start Job Service       o: Stop Job Service       q: Quit


Enter Option:

 

Now it shows that the Job Service is not running. To start it again, provide option "s":

 

Enter Option: s
Checking for existence of AL_JobService...
Starting AL_JobService. This may take several seconds.
Please Wait!!!
01-25-14 10:41:23 (21115:3316356896) JSERVICE: Attempting to Start JobServer(s)..
01-25-14 10:41:23 (21115:3316356896) JSERVICE: Found 1 JobServer(s) configured.
01-25-14 10:41:23 (21115:3316356896) JSERVICE: Using checkJobServer Version <1>
01-25-14 10:41:23 (21115:3316356896) JSERVICE: Attempting to Start AccessServer(s)..
01-25-14 10:41:23 (21115:3316356896) JSERVICE: Found 0 AccessServer(s) configured.
01-25-14 10:41:23 (21115:3316356896) JSERVICE: Attempting to Start SNMP Agent.. Agent is not enabled
01-25-14 10:41:23 (21115:3316356896) JSERVICE: Successfully started AL_JobService


Please exit this utility to start Job Server(s)/AccessServer(s).
Any changes to the configuration will be reflected ONLY after you EXIT this utility.
Press <Enter> to return to options menu.

-----------------------------------------------------------------

                **  Control Job Service **

-----------------------------------------------------------------

Job Service Executable Path                           Status
----------------------------------------------      ------------

/usr/sap/FDQ/businessobjects/dataservices/bin/AL_JobService      Running

-----------------------------------------------------------------

s: Start Job Service       o: Stop Job Service       q: Quit


Enter Option:

 

 

Enter Option: q

** Data Services Server Manager Utility **

       1 : Control Job Service
       2 : Configure Job Server
       3 : Configure Runtime Resources
       4 : Configure Access Server
       5 : Configure SNMP Agent
       6 : Configure SMTP
       7 : Configure SSL
       8 : Configure Native Component Supportability
       x : Exit


Enter Option: x
-bash-4.1$

 

Now we have successfully restarted the Job Server.
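
As an optional sanity check from the shell, you can also confirm that the Job Service process is back, using the executable name shown in the Server Manager output above:

$ ps -ef | grep AL_JobService | grep -v grep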


Bugs in BO Data Services


Hi all,

This is a blog in which I would like to discuss the bugs we come across in SAP BO Data Services.

 

  1. Lookup_ext

Let's say we have three Query transforms. In the first Query transform we perform a lookup_ext to get a value (say lookupvalue), so we have a column lookupvalue in Query2. Now, if we right-click 'lookupvalue' and select Map To Output, Query3 will throw an error. Looking at the error, we can see that the mapping for lookupvalue in Query3 is as follows:

     Query2.lookup_ext.lookupvalue

whereas it should have been

     Query2.lookupvalue

    

This happens only when we use the Map To Output option. If we drag and drop the column to the output pane instead, the error does not come up.

Improve performance of SAP Business Objects Data Services Management console 4.x.


Purpose:

The purpose of this document is to improve performance of SAP Business Objects Data Services Management console 4.x.

Overview:

Environment Details:

Operating system: Windows Server 2008 64-bit

Database: Microsoft SQL Server 2008 R2

Web Application: Tomcat

SAP Business Objects Tools: SAP Business Objects Information Platform Services 4.1 SP2; SAP Data Services 4.2 SP1

Repository version: BODS 4.x

 

I ran into real performance problems in the SAP Business Objects Data Services Management Console. For instance, my real-time services, jobs and adapters took a very long time to open.

All status pages in the SAP Business Objects Data Services Management Console also took a long time to load.


On the Linux operating system, the settings are as follows:


  • Let's modify the Tomcat settings, because by default they have low values:
  • JavaHeapSize (-Xmx) from 2G to 4G
  • MaxThreads from the default (200) to 900
  • To modify JavaHeapSize:

cd <boe/boips inst folder>/sap_bobj/tomcat/bin

modify setenv.sh

# set the JAVA_OPTS for tomcat

Code:

JAVA_OPTS="-d64 -Dbobj.enterprise.home=${BOBJEDIR}enterprise_xi40 -Djava.awt.headless=true -Djava.net.preferIPv4Stack=false -Xmx4g -XX:MaxPermSize=384m -XX:+HeapDumpOnOutOfMemoryError -Xloggc:<bo_inst_folder>/sap_bobj/tomcat/logs/tomcat.gc.log -XX:+PrintGCDetails -XX:+UseParallelOldGC"
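
Once Tomcat has been restarted (see the restart sketch after the server.xml snippet below), you can verify that the new heap setting was picked up. This is a minimal check of my own, assuming a standard Tomcat process:

$ ps -ef | grep -i tomcat | grep -o 'Xmx[0-9]*[mMgG]'   # should now show Xmx4g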

 

  • To modify MaxThreads:

cd <boe/boips_inst_folder>/sap_bobj/tomcat/conf
modify server.xml
Edit the non-SSL HTTP/1.1 Connector defined on port 8080:


Code:

    -->
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" maxThreads="900" URIEncoding="UTF-8"/>

<!-- A "Connector" using the shared thread pool-->

<!--

<Connector executor="tomcatThreadPool"

  port="8080" protocol="HTTP/1.1"

  connectionTimeout="20000"

  redirectPort="8443" />

-->
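
Both changes only take effect after Tomcat has been restarted. On a Linux installation this is typically done with the scripts under sap_bobj (treat this as a sketch, since script names can vary slightly by release):

$ cd <boe/boips inst folder>/sap_bobj
$ ./tomcatshutdown.sh
$ ./tomcatstartup.sh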

  • Clean up old installed patches and service packs from BOE/BOIPS. What does that mean? Just go to <boe/boips inst folder> and run ./modifyOrRemoveProducts.sh (for Linux). I clearly had many different installations on my server, but it is important to delete the old ones!
  • Clean the log directory in <boe/boips inst folder>/sap_bobj/tomcat/logs
  • Clean the logging directory in <boe/boips inst folder>/sap_bobj/logging
  • Clean the old Access Server logs (error<date>.txt and trace<date>.txt) in <boe/boips inst folder>/DataServices/conf/<Access Server Name>
  • Clean the old Job Server logs in <boe/boips inst folder>/DataServices/adapters/logs
  • Clean the old adapter logs in <boe/boips inst folder>/adapters/logs
  • Clean the old RFC logs in <boe/boips inst folder>/
  • Delete trace files (see the housekeeping sketch after the screenshots below)
  • Set the option in the Central Management Console to delete Data Services logs older than 5 days

By Default:

By Default - DS Settings.png

After doing changes:

By Default - DS Settings I.png
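
As referenced in the trace-files bullet above, here is a simple housekeeping sketch for the log directories (the paths come from the list above; the 5-day retention and the use of find are my own assumptions, so test with -print before switching to -delete):

$ find "<boe/boips inst folder>/sap_bobj/tomcat/logs" -type f -mtime +5 -print    # list candidates first
$ find "<boe/boips inst folder>/sap_bobj/tomcat/logs" -type f -mtime +5 -delete   # then remove them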

  • Tune the APS; to start, you can use the CMC wizard.
  • Set the APS properties for tracing to "unspecified" (which means bo_trace.ini is used).



On the Windows operating system, the settings are as follows:


  • Let's modify the Tomcat settings, because by default they have low values:
  • JavaHeapSize (-Xmx) from 2048 MB to 4096 MB
  • MaxThreads from the default (200) to 900


Code:

set PATH=%PATH%;C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win64_x64\

set JAVA_HOME=C:\Program Files (x86)\SAP BusinessObjects\SAP BusinessObjects Enterprise XI 4.0\win64_x64\sapjvm\

set JAVA_OPTS=%JAVA_OPTS% -Xmx4096m -XX:MaxPermSize=384m


Path: <boe/boips inst folder>\SAP BusinessObjects\tomcat\bin\setenv

 

Tomcat Settings.png

Path: <boe/boips inst folder>\SAP BusinessObjects\tomcat\conf\server.xml

 

Maxuimum Thread.png

Best Architecture for Deployment of SAP Business Objects Data Services: 

 

There are many pros and cons to deploying SAP Business Objects Data Services with SAP BOE 4.x; as per best practices, the possible scenarios are shown below.

Possible Architecture.PNG

 



Reference Material:

 

YouTube Video: Splitting BI 4.0 Adaptive Processing Servers

http://youtu.be/2Uyi0V7RdwA

 

 

Best Practices for SAPBO BI 4.0 Adaptive Processing Servers 

http://scn.sap.com/docs/DOC-31711


SAP BI 4.0 Ecosystem how-to videos

http://scn.sap.com/docs/DOC-34251


Performance Optimization Guide - SAP Data Services 4.2:

 

http://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_perf_opt_en.pdf


 

Administrator Guide - SAP Data Services 4.2:

 

http://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_admin_en.pdf

 

 

SAP Data Services 4.2:

 

https://help.sap.com/bods

 

SAP Notes:

 

1640036 Added memory usage when splitting Adaptive Processing Server (APS) services

https://service.sap.com/sap/support/notes/1640036

 

1580280 Adaptive Processing Server and Adaptive Job Server in SAP BI 4.0 are using high amounts of memory and are hard to manage and troubleshoot

https://service.sap.com/sap/support/notes/1580280

 

1694041 - BI 4.0 Consulting:- How to size the Adaptive Processing Server

https://service.sap.com/sap/support/notes/1694041

 

1452165 - How to increase the maximum memory pool for Apache Tomcat used by Management Console - Data Services / Data Integrator

https://service.sap.com/sap/support/notes/1452165

 

1452186 - What is the maximum memory pool limit for Apache Tomcat used by Management Console? - Data Services / Data Integrator

https://service.sap.com/sap/support/notes/1452186

 

1644004 - Enable or disable RFC Server trace logging (traces DQM traffic between SAP & DS)

https://service.sap.com/sap/support/notes/1644004

 

1529071 - Sizing how many RFC Servers you should use

https://service.sap.com/sap/support/notes/1529071

 

1544413 - Troubleshooting RFC Server - composite note

https://service.sap.com/sap/support/notes/1544413

 

1764059 - Troubleshooting RFC Server - can't find log file

https://service.sap.com/sap/support/notes/1764059

Reviews from Social Media Data Extraction


It was Friday at work and I didn't have much work to do. While surfing on SCN, I came across a blog from Hillary Bliss:

Sharknado Social Media Analysis with SAP HANA and Predictive Analysis


I had already planned to watch The Wolf of Wall Street that evening and had heard some good reviews of it (it has to be good, DiCaprio always has good ones... where the hell is his Oscar!!).

So I thought of trying to get the reviews from Twitter using the basic outline of the Text Data Processing blueprints.

Here are the steps and my explorations.

 

Data Extraction

Twitter provides an open Search API that provides an option to retrieve "popular tweets" in addition to real-time search results.

The source data consists of unstructured text in the form of tweets, which are retrieved from the Twitter REST-based Search API using a search term. You can try this out in the Twitter developer console.

 

Capture.PNG

 

As in the Blueprint:

  • Step 1: Create an account on https://dev.twitter.com
    • Because Twitter is a third-party application, these steps may change.
      • Create or log into your Twitter developer account at http://dev.twitter.com.
      • In My Applications, create a new application with a unique name and a placeholder URL
      • Open the OAuth settings to locate the Consumer Key and the Consumer Secret values, and add the values to the search.cfg file.
      • Create an access token and refresh the page.
      • Add the access token and the access token secret values to the search.cfg file.
      • Edit the search.cfg configuration file to specify the terms or hashtags that are used for the Twitter search.

Capture2.PNG3.png

 

Dataflow Development

  • Now that we have the source, we are good to design a Data Services job and dataflow to extract and analyze the sentiment in the tweets.
  • Create a job in the BODS Designer and develop the dataflows for the process.

  Below is the implementation screenshot of the Dataflow

4.png


Twitter Search Dataflow:

  • This dataflow connects to the Twitter API, extracts the tweets and loads them into the database tables.
  • It uses two User-Defined Transforms which contain the Python code to connect to Twitter.

5.png

  • GET_SEARCH_TASKS: a User-Defined Transform which prepares the inputs for the Twitter Search API by extracting information from the search.cfg file.
  • Search Twitter Transform: a User-Defined Transform which retrieves the tweets from the Twitter Search API.

 

Sentiment Analysis:


  • Twitter Process Dataflow:
    • This dataflow uses the Base Entity Extraction Transform of TEXT DATA PROCESSING and analyzes the sentiments in the tweets.

6.png

  • Entity_Extraction_Transform:
    This transform extracts the basic entities for sentiment analysis.
    It uses the English language module and the “english-tf-voc-sentiment.fsm” rule file provided by SAP for the analysis.

 

7.png

 

Run the Job.

And get general reviews from the Twitter public stream.

8.png

 

Though the tables can be used to build a universe, and detailed reports could have been generated, I thought I would try that later.

I had to catch up with the movie.

How to Add Local Repo with Data Services Job Server in LINUX


This is relevant if you are looking to add a local repository to the SAP Data Services Job Server on a Linux box.

 

The commands and steps below were performed on SAP Data Services 4.1.

 

Log in via PuTTY.

 

Putty_Login_screen01.JPG

 

Input: provide the hostname and hit Enter, then provide your credentials so that you can access Linux.

 

Putty_Login_screen.JPG

 

Once you are logged in, navigate as instructed in the following steps.

 

Go to the installation directory of Data Services on the Linux box, generally referred to as $LINK_DIR in all SAP documents.

If you logged in with a named user, check where you are with the pwd command on the Linux prompt once you have successfully logged in to the server.

$ cd $LINK_DIR/bin/

 

$LINK_DIR = /usr/sap/<SID>/businessobjects/dataservices

$ cd /usr/sap/FDM/businessobjects/dataservices/bin

$ . ./al_env.sh

$ ./svrcfg

Unable to load DLL or shared library <libds_crypto.so>. Please make sure the library is installed and located correctly.

 

If you see the above error, you do not have permission to perform the operation. There are two options going forward: either request the permissions, or switch to the user which has them (su will help you do that). However, getting sudo access is the recommended way.

Generally sapds<sid> is the OS user that has the permissions.

 

So I switched the user and performed the same tasks; please see the screen output below.

-bash-4.1$ cd /usr/sap/FDM/businessobjects/dataservices/bin
-bash-4.1$ . ./al_env.sh
-bash-4.1$ ./svrcfg

** Data Services Server Manager Utility **

       1 : Control Job Service
       2 : Configure Job Server
       3 : Configure Runtime Resources
       4 : Configure Access Server
       5 : Configure SNMP Agent
       6 : Configure SMTP
       7 : Configure SSL
       8 : Configure Native Component Supportability
       x : Exit


Enter Option:

 

We have chosen option 2 because we would like to register some of our repositories with the Job Server.

If you have more than one Job Server, select accordingly. In this case there is only one Job Server.

Enter Option: 2
_____________________________________________________________________

                 Current Job Server Information
_____________________________________________________________________

S#  Job Server Name  TCP     Enable   Repository Information
                     Port     SNMP
--  ---------------  -----  --------  ----------------------------

1   JS_FDM_001       3500      N      ds_sh_lr@hlx.unix.com_RP_FDM_ds_sh_lr
                                      ds_sh_l2@hlx.unix.com_RP_FDM_ds_sh_l2
                                      ds_s2_lr@hlx.unix.com_RP_FDM_ds_s2_lr
                                      ds_s2_l1@hlx.unix.com_RP_FDM_ds_s2_l1
                                      ds_s2_l2@hlx.unix.com_RP_FDM_ds_s2_l2
                                      ds_s2_l3@hlx.unix.com_RP_FDM_ds_s2_l3
                                      ds_s2_l4@hlx.unix.com_RP_FDM_ds_s2_l4

_____________________________________________________________________

c : Create a new JOB SERVER entry       a : Add a REPO to job server
e : Edit a JOB SERVER entry             y : Resync a REPO
d : Delete a JOB SERVER entry           r : Remove a REPO from job server
u : Update REPO Password                s : Set default REPO
q : Quit

Enter Option:

 

I have chosen option "a", as we would like to register our repository with the Job Server.

You need to provide the relevant information about the local repository you would like to register with the Data Services Job Server.

Enter Option: a
Enter serial number of Job Server: 1
1)    Oracle
2)    MySQL
3)    DB2
4)    Sybase ASE
5)    SAP HANA
Enter the database type (1,2,3,4 or 5) for the associated repository: 3

Do you want to use data source (ODBC) 'Y|N'? [Y]: N

Enter the repository database server name: hlx.com

Enter the repository database port number: 50000

Enter the repository database name: RP_FDM

  1: DB2 UDB 9.x
Select the repository database version '1' [1]: 1

Enter the repository username: ds_s2_l5
Enter the repository password (not echoed):
Confirm the repository password (not echoed):

Passwords do not match!!! Please enter them again.
Enter the repository password (not echoed):
Confirm the repository password (not echoed):

S#  Job Server Name  TCP     Enable   Repository Information
                     Port     SNMP
--  ---------------  -----  --------  ----------------------------

1   JS_FDM_001       3500      N      ds_s2_l5@hlx.com_RP_FDM_ds_s2_l5

Is this information correct [Y/N]?

 

Once you are happy with the information you provided, you can confirm it.

Is this information correct [Y/N]? Y

Updating the repository <ds_s2_l5@hlxd0bf004.unix.marksandspencerdev.com_RP_FDM_ds_s2_l5>.  Please wait...

 

Continue to Add/Modify/Delete Job Servers[Y/N]:

 

If you have more repositories to configure or add to the Job Server, you may select "Y", as I did in order to add more repositories.

If you are done, you can happily select "N" and the job is done; go to the last step.

 

In the screen below you can check and confirm that the repository you were adding is now registered with the selected Job Server.

Continue to Add/Modify/Delete Job Servers[Y/N]: Y
_____________________________________________________________________

                 Current Job Server Information
_____________________________________________________________________

S#  Job Server Name  TCP     Enable   Repository Information
                     Port     SNMP
--  ---------------  -----  --------  ----------------------------

1   JS_FDM_001       3500      N      ds_sh_lr@hlx.unix.com_RP_FDM_ds_sh_lr
                                      ds_sh_l2@hlx.unix.com_RP_FDM_ds_sh_l2
                                      ds_s2_lr@hlx.unix.com_RP_FDM_ds_s2_lr
                                      ds_s2_l1@hlx.unix.com_RP_FDM_ds_s2_l1
                                      ds_s2_l2@hlx.unix.com_RP_FDM_ds_s2_l2
                                      ds_s2_l3@hlx.unix.com_RP_FDM_ds_s2_l3
                                      ds_s2_l4@hlx.unix.com_RP_FDM_ds_s2_l4
                                      ds_s2_l5@hlx.unix.com_RP_FDM_ds_s2_l5

_____________________________________________________________________

c : Create a new JOB SERVER entry       a : Add a REPO to job server
e : Edit a JOB SERVER entry             y : Resync a REPO
d : Delete a JOB SERVER entry           r : Remove a REPO from job server
u : Update REPO Password                s : Set default REPO
q : Quit

Enter Option:

 

Now you can repeat the steps if you would like to add more local repositories to the Job Server.

 

Last step:-

 

Once you are done with all the steps above, the utility asks the same question again; if you answer "N", it brings up the main menu, where you can select the Exit option once everything you wanted to do on the Job Server is done.

Continue to Add/Modify/Delete Job Servers[Y/N]: N

** Data Services Server Manager Utility **

       1 : Control Job Service
       2 : Configure Job Server
       3 : Configure Runtime Resources
       4 : Configure Access Server
       5 : Configure SNMP Agent
       6 : Configure SMTP
       7 : Configure SSL
       8 : Configure Native Component Supportability
       x : Exit


Enter Option: x

 

************************************************************************END of the Article*********************************************************************************

 

1. If you are interested in performing some other server management tasks (Data Services application, Job Server), you may see this article:

How to Start/Stop Data services Server on LINUX

Data Services on Linux - Version check of Data Services Components

This article is relevant for anyone interested in the Linux commands used to check the versions of the Data Services components installed on a server.
This article was written using SAP Data Services XI 4.1 SP02.
The commands used in this article may also apply to future releases of Data Services.
This article covers the following components of SAP Data Services:

1. IPS (Information Platform Services)
2. Data Services Job Server
3. Data Services Engine
4. Data Services Server Manager
5. Data Services Local repository
6. Data Services Designer

There are multiple ways to check the version of the software and its components. Each option is different and provides information at a different level of detail.
However, a basic version check with an accurate level of detail can easily be obtained from commands.
The following options can help you determine the version of a component; they may or may not be applicable to all Data Services components.
1. Using the CMC (Central Management Console)
2. Using the Linux command prompt
3. Using the installer itself
4. Using the logon pad / logon screen / local repository

The intention of this article is to focus on the Linux commands; however, wherever possible, alternate options will be discussed.

1. IPS (Information Platform Services)


Option :1 Using Commands

Input at the prompt (replace <SID> with your system ID, or adjust the path if your installation differs):
$ cd /usr/sap/<SID>/businessobjects/sap_bobj/enterprise_xi40/linux_x64
Then provide the following command as input to get the installed version, including the patch-level info:
$ strings boe_cmsd | grep BOBJVERSION

Once you execute the above command you will see output like the one below; make sure you have the proper authorizations to execute it.
@(#)BOBJVERSION: 14.0.4.738 boe_cmsd release 12/06/11 linux_x64 vclnxc39vm1

14.0 is the major release, which is equivalent to XI 4.0.
The third part of the version denotes the support pack, which is SP04 in this case.
The last part of the version is the build/patch number, which is 738 in this case.

2. Data Services Job Server


Option :1 Using Commands

Input at the prompt:

cd $LINK_DIR/bin

Alternatively, you can provide the full path (replace <SID> with your system ID, or adjust the path if your installation differs):

$ cd /usr/sap/<SID>/businessobjects/dataservices/bin

Execute the following command to get the Data Services Job Server version:
$ al_Jobserver.sh -v
Output:
SAP BusinessObjects Data Services Job Server Version 14.1.2.378
14.1 is the major release, which is equivalent to XI 4.1.
The third part of the version denotes the support pack, which is SP02 in this case.
The last part of the version is the build number, which is 378 in this case.
Option :2 Using Local Repository via Data Services Designer
(Designer >> Menu >> About Data Services )
DataServices_Version_04_Job_Server.jpg

3. Data Services Engine

Option :1 Using Commands

Input at the prompt:

cd $LINK_DIR/bin

Alternatively, you can provide the full path (replace <SID> with your system ID, or adjust the path if your installation differs):

$ cd /usr/sap/<SID>/businessobjects/dataservices/bin

Execute the following command to get the Data Services Engine version:

$ al_engine.sh -v

Output:

SAP BusinessObjects Data Services Engine Version 14.1.2.378

14.1 is the major release, which is equivalent to XI 4.1.
The third part of the version denotes the support pack, which is SP02 in this case.
The last part of the version is the build number, which is 378 in this case.

 

Option :2 Using Local Repository via Data Services Designer

 

(Designer >> Menu >> About Data Services )

 

DataServices_Version_05_Job_Engine.jpg

 

4. Data Services Server Manager

Option :1 Using Commands

 

Input at the prompt:

cd $LINK_DIR/bin

Alternatively, you can provide the full path (replace <SID> with your system ID, or adjust the path if your installation differs):

$ cd /usr/sap/<SID>/businessobjects/dataservices/bin

Execute the following command to get the Data Services Server Manager version:

$ svrcfg -v

Output:

SAP BusinessObjects Data Services Server Manager Version 14.1.2.378

14.1 is the major release, which is equivalent to XI 4.1.
The third part of the version denotes the support pack, which is SP02 in this case.
The last part of the version is the build number, which is 378 in this case.
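
To print the Job Server, Engine and Server Manager versions in one go, the three commands above can be combined into a small loop (a convenience sketch reusing the command names shown in this article; adjust the names/casing to your installation):

$ cd $LINK_DIR/bin && . ./al_env.sh
$ for c in al_Jobserver.sh al_engine.sh svrcfg; do ./$c -v; done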

5. Data Services Local repository

Input at the prompt:

cd $LINK_DIR/bin

Alternatively, you can provide the full path (replace <SID> with your system ID, or adjust the path if your installation differs):

$ cd /usr/sap/<SID>/businessobjects/dataservices/bin

Execute the following command to get the Data Services repository version (if you have multiple repositories, repeat the step below for each repository you want to check). Note that the database version value contains spaces, so it needs to be quoted:
$ repoman -Uusername -Ppassword -Sservername -s -NDB2 -Qdatabasename -p50000 -V"DB2 UDB 9.X" -tlocal -v

Option :2 Using Local Repository via Data Services Designer

 

(Designer >> Menu >> About Data Services )

 

DataServices_Version_06_Repository.jpg

6. Data Services Designer

Option :1 Using Logon pad

DataServices_Version_01.JPG

Option :2 Using Help (Designer >> Menu >> About Data Services )

 

DataServices_Version_03.JPG
