Channel: Data Services and Data Quality

Same table as Source and Target in a dataflow without table lock (Teradata) – Issue Solution:


Scenario:

 

Consider a scenario where we have to use the same Teradata table as source and target in a single dataflow.

This may sometimes cause a table lock in the Teradata database, and the job will hang without showing any progress.

 

1.PNG

Here the table bods_region is used as both source and target, which causes the job to hang.

 

Resolution:

 

To avoid this issue, we can split the dataflow execution into sub dataflows. This is achieved by adding a Data_Transfer transform to the dataflow.

 

2.PNG

 

Here the added Data_Transfer transform (staging table DT_Test) splits the execution into multiple sub dataflows, which can be verified in 'Optimized SQL' as shown below.

 

3.PNG

4.PNG

  • The first sub dataflow joins the source tables and loads the result into the DT_Test table.
  • The second sub dataflow reads from DT_Test and loads the target bods_region table.

 

This resolves the Teradata table lock issue: after the first sub dataflow completes, the lock on the bods_region table is released, so the second sub dataflow can load data into the target successfully.

 

This resolution can be applied in any scenario where a lock occurs because of a simultaneous read/write on the same table.


How to create "Full Outer Join" in SAP BODS


Picture1.jpg

Although this can be done directly with the "SQL Transform" by providing a full-outer-join query, that approach is generally not recommended for performance reasons.

 

The picture shows how to achieve the same result with standard transforms. We have two source tables: one Query transform produces the left-outer-join result and another Query transform produces the right-outer-join result. The outputs of both Query transforms are merged (union all), the duplicate rows are removed with another Query transform (Query_2), and the output is directed to the output table (TEST_OUTPUT).
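For reference, here is a minimal SQL sketch of the same idea; the table and column names (test_a, test_b, id, col_a, col_b) are illustrative and not taken from the job above:

-- Left outer join UNION right outer join gives the same rows as a full outer join.
-- UNION (rather than UNION ALL) also removes the duplicated inner-join rows,
-- which is the role Query_2 plays in the dataflow above.
SELECT a.id, a.col_a, b.col_b
FROM   test_a a
LEFT OUTER JOIN test_b b ON a.id = b.id
UNION
SELECT b.id, a.col_a, b.col_b
FROM   test_a a
RIGHT OUTER JOIN test_b b ON a.id = b.id;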

.


error.jpg

 

 

The error simply means that BODS is not getting the required handle for connecting to Oracle. The most frequent cause is a too-small value of the Oracle "PROCESSES" parameter, which needs to be increased to resolve this issue. Follow the steps below:

 

1) Open a SQL interface such as SQL*Plus.

2) Log in as a system DBA (conn sys as sysdba).

3) Enter the SQL statement: alter system set processes=200 scope=spfile

4) Close all other applications connecting to Oracle (such as BODS, SQL Developer), keeping only the current SQL*Plus window open.

5) Enter the SQL command: startup force

 

This will restart the Oracle database, and the new value of the PROCESSES parameter will take effect.
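For convenience, the same steps as a single SQL*Plus session; the value 200 is only the example used above, so size it for your own environment:

-- Run in SQL*Plus; the SYS password will be prompted for
CONNECT sys AS SYSDBA

-- Check the current value before changing it
SHOW PARAMETER processes

-- Raise the limit (200 is the example value from step 3)
ALTER SYSTEM SET processes=200 SCOPE=spfile;

-- Restart so the spfile change takes effect (step 5)
STARTUP FORCE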

SAP Data Insight


Data Insight

Data Insight is used to perform a DHA (Data Health Assessment) on data, i.e. to check whether the data is fit for use. We use Data Insight to test and profile the data before it enters the ETL process; in other words, Data Insight carries out the data investigation for the DHA. It automates the analysis and monitors the data.

 

Using Data Insight we can perform the following tasks


Data Profiling

Column query

Integrity test and Custom query

Scheduling

Creating trend reports

Sampling reports

 

Getting started

Creating Connection


Navigating to Data Insight


Note: First we need to start the Data Insight Engine before we use the tool.

 

To start the Data Insight Engine, follow the navigation below.

Start --> Program Files -->Business objects XI 3.0--> Business objects Data Insight --> Data Insight Engine

Insight1.png


Once you click on this, a DOS window opens and starts the engine.

Once the engine has started, open the Data Insight GUI from the same navigation path above.

When Data Insight starts, you will see the screen below.


Insight2.png

Now we need to create a project.

To create a project, go to:

File -> New Project -> give the project name and check the box for Share Project.

Insight3.png

Sharing the project makes it accessible to other users.

 

Now you have to provide the connection name.

Choose Database and click on the down arrow to specify the database connection.

Insight4.png


If you are using it for the first time, enter your SQL Server name and click OK; this opens the Data Link Properties window.

Enter the server name, username and password. In step 3, select the SQL database on which you want to perform the tests.

Click Test Connection to verify that the credentials are correct, then click OK.

Insight5.png

The window below then opens for selecting owners; click OK. The Insight window now shows the selected database. Expand the database to see the tables under it, then go to the required table and expand it.

Insight6.png

Insight7.png

Here we have 4 tabs (Data Profile, Column Query, Referential Integrity, Custom Query) using which we can perform different types of tests on the data.

Data Profile

Using this tab we can perform tests such as Summary, Comparison, Frequency, Word Frequency, Uniqueness and Redundancy on the data.

Summary gives a snapshot of the data for decision making or further drill-down.

How do we carry out the Summary test?

You can run the summary at the table level or at the column level. Select the check box under the Summary column and click RUN.

It will give you the Summary Profile below, which provides a complete DHA of the data.

Insight12.png

You can check Save Report and click Close. It will then ask you to save the profile report; click Yes, give the profile a name and click OK.

Insight9.png

Insight10.png

Notice that the Last Run column is now populated with the timestamp and the result. Click on the result next to the timestamp to view the details.

Comparison test

Comparison gives a report of the count and percentage of rows with incomplete column values.

To run a comparison test, click the check box under Comparison at the table level or the row level and click RUN.

Insight11.png


Insight12.png

Now you can observe the result, which shows any matching or duplicate records; in our case there are no duplicate or matching records.

You can also click Print Report to generate the report, and export it to different formats by clicking Export Report.

Insight13.png

Insight14.png

Once you close this report and click Close in the main window, it will ask you to save the result, which can be saved the same way as in the procedure above.

Frequency (FRQ) is used to find the frequency distribution of distinct values in columns.

The working procedure is the same as above: click the check box under FRQ and click RUN to see the results. You can click Print Report to export the results to different formats, or save them by checking Save Report, clicking Close and giving a profile name.

Please see the following screen shots.

Insight15.png

Insight16.png

Insight17.png

WFRQ (Word Frequency): frequency distribution of single words.

Same as the above procedure, click on check box and click on run to see the results.

UNQ (Unique): gives the count and percentage of rows with non-unique column values.

Same as the above procedure, click on check box and click on run to see the results.

RDN (Redundancy): this test identifies commonalities and outliers between the columns.

Same as the above procedure, click on check box and click on run to see the results.

Column Query: This is used to analyze the data within Data Insight.

  1. Select the column on which you want to perform the test and right-click -> Add Combined Column Query.

We can perform the following tests using the Combined Column Query.

 

Insight18.png

Format
Occurrence: search for occurrences (<, >, =, <=, >=) 'n' times
Pattern: the pattern of the data in the column
Pattern recognition: recognizing the string pattern with special characters
Range: specify the min and max values for the range
Reference column: the reference column against which this column is checked
Specific value test: search the column for a specific value


Select a radio button on the left side and the respective options will be activated on the right side.

Insight19.png

Once you select the query type on the left side, choose the respective options on the right side, check the Return Data check box and click Run.

In our example, we take the specific value test.

Select Specific Values on the left side and specify a value on the right-hand side. Select the Return Data check box and click Run; you will get the result below. You can click Print Report to see the data in report format, or check Save Data and click Close; click Yes to save the report, give it a name and click OK.

Happy Learning

Rakesh

Introduction, Artifacts and look and feel of BODS


SAP BO DATA Integrator / Data Services

 

 

Data Services integrates with SAP BI, SAP R/3, SAP applications and non-SAP warehouses.

Purpose: It performs ETL via batch jobs and online (real-time) processing, using bulk and delta loads of both structured and unstructured data, to build a warehouse (SAP or non-SAP).

 

Data Services is the combination of Data Integrator and Data Quality. Previously these were separate tools: Data Integrator was used for the ETL part and Data Quality for data profiling and data cleansing.

With Data Services, DI and DQ are combined into one interface, providing the complete solution (data integration and quality) on one platform.

It also combines the separate job servers and repositories of DI and DQ into one.

 

Data Federator: The output of Data Federator is virtual data. Federator provides data as input to Data Services, and using Federator we can present data from multiple sources as a single source.

 

Data Services Scenarios:-

Source       --    DS    --    Warehouse

SQL          --    DS    --    DB
Flat File    --    DS    --    DB
Flat File    --    DS    --    BI
R/3          --    DS    --    BI
R/3          --    DS    --    DB
SQL          --    DS    --    BI

 

We can move the data from any source to any target DB using Data Services.

Data Services is a utility for the ETL process; it is not a warehouse, so it does not hold any data itself.

Data Services can build the ETL process and generate a warehouse (SAP or non-SAP).

 

DS is mainly used for three sorts of projects:

1) Migration
2) Warehouse or DB building
3) Data Quality

 

Data Profiling: Pre-processing of data before the ETL to check its health. Profiling tells us whether the data is good or bad.

 

Advantages of Data Services over SAP BI/BW ETL process

 

It is a GUI-based framework
It has in-built configuration for multiple data sources
It has numerous in-built transformations (Integrator, Quality, Platform)
It performs data profiling
It easily integrates external systems
It supports the Export Execution Command to load data into the warehouse via batch processing
It generates ABAP code automatically
It handles structured and unstructured data
It can generate a warehouse (SAP or non-SAP)
It supports large-scale data cleansing, consolidation and transformation
It can do real-time, full and incremental data loads

 

Data integrator / Services Architecture

 

intro1.png

There is no concept of process chains, DTPs or InfoPackages when you use Data Services to load the data.

 

Data Integrator Components

 

Designer

intro2.png

It creates the ETL process
It has a wide set of transformations
It includes all the artifacts of the project (work flows, data flows, datastores, tables)
It is the gateway for profiling
All Designer objects are reusable

 

 

 

Management Console (URL based tool / Web based tool)

 

intro3.png

It is used to configure the repositories
It allows us to configure user profiles for specific environments
It allows us to create users and user groups and assign the users to the user groups with privileges
It allows us to auto-schedule or execute the jobs
We can execute the jobs from any geographic location, as this is a web-based tool
It allows us to connect the repositories to the different environments (Dev/Qual/Prod)
It allows us to customize the datastores

 

Access Server

 

It is used to run the real-time jobs
It receives the XML input (real-time data)
XML inputs can be loaded into the warehouse using the Access Server
It is responsible for the execution of online/real-time jobs

 

Repository Manager

intro4.png

It allows us to create the Repositories (Local, Central, and Profiler)

Repositories are created on top of the standard database

Data Services system tables are available here

 

 

Job Server

 

This is the server responsible for executing the jobs. Without assigning a local/central repository to it, we cannot execute a job.

 

Data Integrator Objects

 

Projects :-

 

A project is a folder where you store all related jobs in one place; it is simply a folder to organize jobs.

 

Jobs:-

Jobs are the executable units of Data Services. A job is created under a project.

 

Batch Job

Online jobs

 

Work Flows:-

A work flow acts as a folder to contain related data flows. Work flows are reusable.

 

Conditionals:-

A conditional contains work flows or data flows, and a script controls whether or not they are triggered.

 

Scripts:-

Scripts are pieces of code used to define or initialize global variables, control the flow of conditionals and of execution, print statements at runtime, and assign specific default values to variables.

 

Data Flow:-

The actual data processing happens here.

 

Source Data Store:-

It is the object through which source data is imported from the database or SAP system into the Data Services local repository.

 

Target Data Store:-

It is the collection of dimension and fact tables used to create the data warehouse.

 

Transformations:-

These are the transforms used to carry out the ETL process. They are broadly categorized into three groups (Platform, Quality and Integrator).

 

File Format :-

It contains various legacy system file formats

 

Variables:-

We can create local and global variables and use them in the project. Variable names start with the "$" symbol.

 

Functions:-

We have numerous in-built functions (string, math, lookup, enrich and so on).

 

Template Table:-

These are the temporary tables that are used to hold the intermediate data or the final data.

 

Data Store:-

A datastore acts as a port through which you define the connections to source or target systems. You can create multiple configurations in one datastore to connect it to different systems.

 

ATL :-

ATL files are like BIAR files; the name comes from a company and, unlike BIAR, it is not an abbreviation of anything.

Projects, jobs, work flows, data flows and tables can be exported to ATL files so that they can be moved from Dev to Qual and from Qual to Prod.

Similarly, you can import the projects, jobs, work flows, data flows and tables that were exported to ATL back into Data Services.

 

Thanks

Rakesh

Creating a Full secure Central repository


Can we create a fully secure central repository with read-only access for the developers and full access for the admin?

 

Yes this is pretty much possible with the Secure Central Repository in BODS.

 

It is pretty simple.

You will have the following users created:

 

1) Admin

2) Developer 1, 2, .. n

 

Go to Management console --> Administration --> Central repository (you can see your secure central repositories available here).

 

Expand your central repo and you can See Users and Groups.

 

Create two Groups Eg:- Dev and Admin. You will have a default Group called DIGroup.

 

Now go to the Users tab and map your Admin user to the Admin group and DIGroup, and the Developers to only the Dev group.

By doing this you restrict the developers to read access and give full permission to the admin.

 

Now go to your Designer, configure your CR, and log in with the admin credentials (username and password).

Go to your datastores and do a check-out (objects and dependents).

 

The developers will configure the same CR and log in to it with developer credentials.

Now the developers have only read access, and on top of that those datastores are checked out by the admin.

 

This gives you a fully secure central repository.

 

 

Thanks

Rakesh

Introduction to Operation-Codes & The Behavior of Map_Operation Transform when used individually.


Although this is not a very complex transform, let us go a bit deeper into its basics, find out why it is needed, and take a beginner's look at operation codes.

 

The purpose of Map_Operation transform is to map/change the Operation-Code of incoming/input row to the desired Operation-Code/Operation to be performed on the target table.

 

Why do we need it at all? It might be that an incoming/input row needs to be updated in the target table (because some data has changed since the target was last populated), but instead of updating it you want to insert a new row and keep the older one as well, or you want to delete the changed row from the target table, or you want to do nothing at all for changed rows. The operation codes are the levers that make all of this possible.

 

But what are these operation codes? Suppose you have one source table named "source_table" and one target table named "target_table", both with exactly the same schema. In the first run you populate target_table with all the data in source_table, so both tables contain exactly the same data. Then some changes are made to source_table: a few new rows are added, a few rows are updated, and a few rows are deleted. If we now compare source_table with target_table and set the status of each row of source_table relative to target_table, each input row will have one of the following statuses:

 

-Rows that exist in source_table but not in target_table, i.e. the new rows: these need to be inserted into target_table, so the operation code "insert" is associated with these rows coming from source_table.

 

-Rows found in both tables but updated in source_table: these need to be updated in target_table, so the operation code "update" is associated with these rows coming from source_table.

 

-Rows that exist in target_table but were deleted from source_table after the last run: these need to be deleted from target_table (although we rarely perform deletion in a data warehouse), so the operation code "delete" is associated with each row of this kind.

 

-Rows that are unchanged in both tables: these ideally need no operation, so the operation code "normal" is associated with such rows.

 

Well, how do we perform this comparison of source and target tables? This is done by the Table_Comparison transform. It compares the input table (source_table in our example) with another table, called the comparison table in BODS jargon (target_table in our example), and after comparing each row it associates an operation code with each row of the input table. If we choose, it also detects the rows that were deleted from the input table (so we can decide whether to perform deletions on target_table or not). But we are not going into the details of the Table_Comparison transform here, because I want to play with the Map_Operation transform alone, and I know that looks crazy. As in the figure below, if I connect the source table directly to the Map_Operation transform, the operation code associated with all rows is "normal" by default, until we change it using a Table_Comparison transform.

 

 

Picture1.jpg

 

Playing with the basics of the Map_Operation transform: As said earlier, the job of the Map_Operation transform is to change/map the incoming op-code (operation code) to the desired outgoing op-code. In the picture below we have the options "Input row type" and "Output row type". In the mapping above, because we connected the source directly to the Map_Operation transform, all incoming rows have the op-code "normal" associated with them. Hence the second, third and fourth options do not matter for now, because there are no input rows with the operation codes "update", "insert" or "delete".

 

Picture2.jpg

 

So let us see what happens if we try, one by one, the options provided in the drop-down menu for "Output row type". The interesting ones are "update" and "delete"; let us see why.

Picture3.jpg

First, let us suppose there is no primary key defined on the target table:

 

-"normal" : If we choose "normal" as Output row type op-code, all input rows will be inserted to the target table.

 

-"insert" : same as "normal" , all input rows will be inserted to the target table.

 

-"discard": Nothing will be done.

 

-"delete": It'll delete the matching rows from the target table. Even if the primary key is not defined on target , delete operation is performed by matching the entire row. i.e. Value of each field in the input row matched with Value of each field in the Target table row, if matched the target row is deleted. No need of a Key column for delete operation.

 

-"update": This time, when no primary key is defined on the target , the update operation will not do anything. This is actually logical, I have an input row with a key field (say EmpNo) , but how can i ask my software to update the this row in target table if i haven't mentioned the same key in the target table as well? How my software will find the row which needs to be updated ? If i say , match the entire row , then it'll find the exact match of entire row if found, and that would mean that there is nothing to be updated as the entire row matches. So, i need to mention something (some column) in the target using which i can find the row to be updated.

 

So, there are two ways to do this:

The first is to define a primary key on the target table and re-import the table.

The second (assuming the input table has a primary key defined) is to open the target table by double-clicking on it; in the Options tab we have the "Use input keys" option, as shown below. By choosing Yes here, it will use the primary key defined on the input table, provided a column with the same name and data type is also present in the target table.

 

Picture4.jpg

 

Secondly, if a primary key is defined on the target table, the "normal" and "insert" operations will fail when a row with the same primary-key value is inserted again; the job will fail and stop. The "update" and "delete" operations work as before.

 

The behaviour of the Map_Operation transform changes somewhat when it is preceded by a Table_Comparison transform. The need for a primary key on the target, or for the "Use input keys" setting shown above, goes away; it updates the target rows without them. This is probably because we specify the primary key column of the input table in the Table_Comparison transform along with the comparison columns, and the rows presumably carry this information, along with the associated op-code, on to the Map_Operation transform. I verified this behaviour experimentally but can only guess at the internal engineering.
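As a rough mental model only (this is not the exact SQL that Data Services generates, and the table and column names are illustrative), the output row types behave roughly like these statements:

-- "normal" / "insert" with no key on the target: plain inserts
INSERT INTO target_table (empno, ename, sal) VALUES (:empno, :ename, :sal);

-- "delete" with no key on the target: the entire row has to match
DELETE FROM target_table
 WHERE empno = :empno AND ename = :ename AND sal = :sal;

-- "update" once a key is available (target primary key or "Use input keys")
UPDATE target_table
   SET ename = :ename, sal = :sal
 WHERE empno = :empno;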

Web Service Consumption in Data Services Designer from Net Weaver


Has anyone successfully Consumed an authenticated Web Service in Data Services Designer from Net Weaver?


How to import XML schema


  Below are the steps to import an XSD file in BODS:

  • Open the local object library and go to the Formats tab.
  • Right-click XML Schema and click New. The Import XML schema format dialog box opens.

Import2.JPG

  • In the Format Name box, name the XML Schema.
  • For the File name/URL, click Browse to navigate to the XML schema file and open it.
  • For Namespace, click on the drop-down to select the namespace.

Import1.JPG

  • In the Root element name list, click on the root element.
  • Click OK.
  • The XML schema is imported and we can see it in the local object library.

import7.jpg

 

 

Some tips to avoid XML parse errors at run-time:

  • The elements in the XML file must appear in the same order as in the XSD.
  • All mandatory fields specified in the XSD must be present in the XML file.
  • The data types of the elements in the XML file must match the specification in the XSD.

BODS - SCD2 Teradata lock issue Resolved


Problem:

 

While developing an SCD2 data flow using Teradata tables, a lock occurred between the updates and the reads of the Teradata target table, and the job hung for a long time.

 

 

SCD2

The goal of a slowly changing dimension of type 2 is to keep the old versions of records and just insert the new ones.

 

 

Solution:

 

If we follow the conventional SCD2 method above with Teradata tables, the job will hang with a deadlock. Because the Table_Comparison (TC) transform points to the same Teradata target, the incoming records are compared against the Teradata target table while also being inserted/updated in that same table. Because of this simultaneous comparison and manipulation on one table, Teradata raises a deadlock and the job hangs for a long time.

 

 

Normal flow:

Source Table --> Query --> TC --> HP --> KG --> Target Table
(TC = Table_Comparison, HP = History_Preserving, KG = Key_Generation)

 

 

Solution:

 

Create a view over the target table using the LOCKING ROW FOR ACCESS method, as shown below.

 

 

CREATE VIEW TABLE_VW
AS LOCKING ROW FOR ACCESS
SELECT * FROM TARGET_TABLE WHERE EFF_STAT = 'A';   -- TARGET_TABLE stands for the SCD2 target table

 

 

Then use the view in the TC transform for comparing the target records. Now we have the same records for comparison and manipulation, but in different objects: as a view in the TC transform and as a table in the target.

 

LOCKING ROW FOR ACCESS in the view allows dirty reads from the table while still allowing INSERT/UPDATE/DELETE operations on it. Thus we can read the records from the table through the view and INSERT/UPDATE/DELETE into the same table directly.

1.png

New flow:

Source Table --> Query --> TC (View) --> HP --> KG --> Target Table

Selecting only alphanumeric data


Use the regular expression function below to select only alphanumeric data from the source:

 

match_regex(SQL_Latest_Employees.HRRA01_LAST_N , '[0-9A-Za-z]*',NULL ) = 1
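For comparison, a similar filter expressed in plain Oracle SQL (illustrative only; the anchors ^ and $ make explicit that the whole value must be alphanumeric):

SELECT *
  FROM sql_latest_employees
 WHERE REGEXP_LIKE(hrra01_last_n, '^[0-9A-Za-z]*$');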

Meet ASUG TechEd Speaker Emil - Kellogg's Data How to Deliver One Version of the Truth


ASUGSapTech_Logo.jpg

As part of our ASUG / TechEd "Meet the Speaker" blog series, I am honored to introduce ASUG TechEd speaker Emil Beloglavec of Kellogg's Company of Battle Creek, Michigan.  Emil is presenting at SAP TechEd Las Vegas in October.

 

I spoke with Emil last month and he is great at sharing his knowledge of SAP solutions and tools.  His session is RDP 143 Kellogg’s Data – How to Deliver One Version of the Truth?

 

EB.jpg

 

Q: What will attendees learn at your session?

 

 

A: Kellogg is stepping up towards new technologies, for example: BW to HANA, CRM on HANA, BPC on HANA, and mining big data in form of structural (POS) or textual form (Hadoop). We strongly believe that first foundations have to be laid; one of them is definitely to achieve data quality and data harmonization across SAP and non-SAP systems. The presentation is a show case of Kellogg’s path taken so far and the possibilities ahead of us. Its main goal is to encourage a discussion in the spirit of knowledge exchange.

 

 

Q: How did you get your start with SAP?

 

 

A: Due to my previous experience with data warehousing (Oracle and Cognos), I started with SAP BW when I joined Kellogg about 8 years ago. For the past three years I have worked with SAP BusinessObjects Data Services.

 

 

Q: What is your favorite hobby?

 

A: Listening to classical music, especially to composers like Rachmaninov, Tchaikovsky, Chopin, Liszt, Saint-Saens, Grieg, Beethoven and the list could go on and on.

 

 

Here is his abstract:

Kellogg began building a new SAP ERP/SAP NetWeaver Business Warehouse environment, replacing one of its three SAP instances four years ago. SAP Data Insight and SAP Data Services were selected to assess the quality of data and convert data from legacy SAP/non-SAP systems into the new environment. This project is in its final stage of deployment and they are moving into sustainment phase. One of the driving factors for this project was to provide accurate data so that the business obtains the right answers at the right time. The presenters of this session will share lessons learned about assessing data, conversion, dual maintenance, generations of tools involved, a vision of the future, and how timely delivery of accurate data can support it.

 

 

Check out Emil's presentation from ASUG 2012 Annual Conference presentation slides - 2 Go-Lives 0 Data Defects with data migrations - that is a great achievement.

 

Add Emil's SAP TechEd session to your favorite list today and meet him in person in Las Vegas in October.

 

Related Links:

Beer and Analytics by Alexandre Papagiannidis Rivet

Workflow Approval Anywhere, Anytime - Meet the Speaker Graham Robinson - ASUG SAP TechEd session

Meet the ASUG TechEd Speaker Dennis Scoville - Empowered Self-Service BI with SAP HANA and SAP Lumira

Poll: What Version of Data Services are you using? by Ina Felsheim

Great TechEd pre-conference session: Before you Start, Make a Plan by Ina Felsheim

ASUG TechEd Pre-Conference Sessions

How do i load the data through IDOC in a single run of bods job


Hi ,

 

We have a scenario where we need to extract vendor data from PeopleSoft to SAP using BODS. We are using the standard IDoc job to load into SAP.

In these loads we have two fields, Alternate Payee (LNRZB) and Fiscal Address Vendors (FISKN), which should be updated with the SAP vendor IDs of the vendors that have alternate payees and fiscal addresses respectively. But before loading into SAP we do not yet have the SAP IDs, so is there any way to load data for these two fields without running the job twice? (The first time we would load all the records into SAP with these fields null, and the next time we would have to update them.)

 

In Informatica we have a dynamic lookup, which refreshes the lookup cache from the target table at run time; is there any such thing available in SAP BODS?

Steps for executing BODS job from Unix Script with user defined global parameters


Steps for executing BODS job from Unix Script

 

 

 

This will help you understand how to change the global parameters used by a job when the job is invoked via a Unix script.

 

When you export the .sh file for job execution, the default parameter values (or the last-used values) are embedded in the .sh file. Whenever you execute that .sh file, the job always starts with the same parameters, so you would need to modify the .sh file every time the user (global) parameter values have to change. The simple steps below resolve this issue effectively with minor modifications and a simple Unix script that passes new user-defined values and executes the BODS job.

 

Log in to Data Service Management Console

Go to Administrator-> Batch (Choose Repository)

                         pic1.jpg

  Click on the Batch Job Configuration tab to choose the job which needs to be invoked through Unix.

 

                         pic2.jpg

 

  Click on Export Execution Command Option against the job

                         pic3.jpg

 

 

Click on Export.

 

 

Two files will then be generated and placed on the Unix box. **

One .txt file named "Reponame.txt" in /proj/sap/SBOP_INFO_PLAT_SVCS_40_LNX64/dataservices/conf

One .sh file named "jobname.sh" in /proj/sap/SBOP_INFO_PLAT_SVCS_40_LNX64/dataservices/log

** The location will vary according to the setup

 

pic4.jpg

 

1. For a job that requires no user-entered parameters, we can directly call the generated .sh file to execute the job:

 

. ./Job_Name.sh

 

2. For a job which has parameters, the script will look like this:

 

/proj/sap/SBOP_INFO_PLAT_SVCS_40_LNX64/dataservices/bin/AL_RWJobLauncher "/proj/sap/SBOP_INFO_PLAT_SVCS_40_LNX64/dataservices/log/DEV_JS_1/"
-w "inet:acas183.fmq.abcd.com:3500" " -PLocaleUTF8 -R\"REPO_NAME.txt\"  -G"1142378d_784a_45cd_94d7_4a8411a9441b"
-r1000 -T14 -LocaleGV -GV\"\$Character_123=MqreatwvssQ;\$Integer_One=Qdasgsssrdd;\"  -GV\"DMuMDEn;\"    -CtBatch -Cmacas183.fmq.abcd.com -CaAdministrator -Cjacas183.fmq.abcd.com -Cp3500 "

 

The highlighted items are the default parameter values provided in the job. If the user wants to change these default values to user-defined entries when executing the job, we have to make the following changes in the script.

 

/proj/sap/SBOP_INFO_PLAT_SVCS_40_LNX64/dataservices/bin/AL_RWJobLauncher "/proj/sap/SBOP_INFO_PLAT_SVCS_40_LNX64/dataservices/log/DEV_JS_1/"

-w "inet:acas183.fmq.abcd.com:3500" " -PLocaleUTF8 -R\"REPO_NAME.txt\"  -G"1142378d_784a_45cd_94d7_4a8411a9441b"

-r1000 -T14 -LocaleGV -GV\"\$Character_123=$1;\$Integer_One=$2;\"  -GV\"DMuMDEn;\"    -CtBatch -Cmacas183.fmq.abcd.com -CaAdministrator -Cjacas183.fmq.abcd.com -Cp3500 "

 

Where $1 and $2 are the parameters passed by the user replacing the default values.

 

Thus the job should execute in the following way

 

. ./Job_Name.sh $1 $2

 

Areas of difficulty.

 

The user entries must be fed to the script as encrypted data. For this, the values should be encrypted using the AL_Encrypt utility.

 

That means that if the user needs to pass the integer value 10 for a parameter variable, say $Integer_One, he cannot use $Integer_One=10; instead he has to pass "MQ", which is the result of the AL_Encrypt utility:

 

Al_Encrypt 10;

 

Result : - MQ

 

I have created a script that resolves this issue to a very good extent.

 

Logic of the Custom Script

 

Name of the Script: Unix_JOB_CALL.sh

 

Pre-requisite: Keep a parameter file named "Job_Parm_File" (with the entries line by line, in parameter order).

(As we have different scripts for different jobs, we can keep different parameter files as well. Whenever a value needs to change, the user can simply go and modify the value without changing the order of the parameters.)

 

Sample Script Code

 

# Recreate an empty temp file to hold the encrypted parameter values
rm -f Unix_Param_Temp_File
touch Unix_Param_Temp_File
chmod 777 Unix_Param_Temp_File

# Encrypt each parameter value from Job_Parm_File (one value per line)
# with the Data Services AL_Encrypt utility
cat Job_Parm_File | while read a
do
    AL_Encrypt $a >> Unix_Param_Temp_File
done

# Join the encrypted values into one space-separated list
JOB_PARAMS=`tr '\n' ' ' < Unix_Param_Temp_File`

# Call the exported job script with the encrypted values as positional parameters
. ./Test_Unix.sh $JOB_PARAMS

# Clean up
rm Unix_Param_Temp_File

 

 

In this way you can execute the job with user-defined parameter values from Unix.

DS : Things In & Out


Just to have fun with DS..

  • If you want to stop staring at the trace log, waiting for it to show "JOB COMPLETED SUCCESSFULLY", you can use an easy aid in DS.

 

         Go to Tools --> Options...
         Then break out Designer and click on General.

         Then click the box that says: Show Dialog when job is completed.

                                    DS_jb_comp.jpg
        Now whenever your job completes, you'll get a little dialog box popping up to let you know.

                                                 ds_jb_1.jpg

 

  • One of the annoying defaults in Data Services is that all the names in Workflows or Dataflows are cut off after 17 characters.

 

          1.jpg                   2.jpg

            So to fix this go to Tools -> Options

            Then break out Designer and click on General.

            Where it says: Number of characters in workspace name. Change the number 17 to 100.
            Click OK when it asks you if you want to overwrite the job server parameters.

                                   3.jpg

Enjoy...


Idoc Status Simplified


 

Hi, this blog is meant to help ABAP programmers and developers with all the status messages encountered while posting an IDoc in SAP.

I have also mentioned the error reasons and possible solutions.

Hope this is helpful to all.

 

Sequence of Inbound and Outbound statuses

 

Starting statuses may be: 01 (outbound), 50 (inbound), 42 (outbound test), 74 (inbound test)

 

Status | Type | Description | Next status | Error reason | Solution to error
01 | Success | Outbound IDoc created | 30 (success), 29 (error) | |
02 | Error | Error passing data to port | | | Correct the error and execute program RSEOUT00 again
03 | Success | Outbound IDoc successfully sent to port | None, 32 | |
04 | Error | Error within control information of EDI subsystem | | |
05 | Error | Error during translation | | |
12 | Success | Dispatch OK | | Changed from status 03 by transaction BD75 (see below) |
25 | Success | Processing outbound IDoc despite syntax errors | | |
26 | Error | Error during syntax check of outbound IDoc | | Missing mandatory segment, for example | You may edit the IDoc or force it to be processed
29 | Error | Error in ALE service | 29, 31 (for example) | |
30 | Success | Outbound IDoc ready for dispatch (ALE service) | 32 | Partner profile customized not to dispatch immediately | Execute program RSEOUT00
31 | Error | No further processing | | |
32 | Success | Outbound IDoc was edited | | There was a manual update of the IDoc in the SAP tables; the original was saved to a new IDoc with status 33 |
33 | Success | Original of an IDoc which was edited; it is not possible to post this IDoc | None | Backup of another manually updated IDoc, see status 32 |
35 | Success | IDoc reloaded from archive; cannot be processed | | |
37 | Error | Erroneous control record (for example, the "reference" field should be blank for outbound IDocs) | None, 37 | |
42 | Success | Outbound IDoc manually created by the WE19 test tool | 01, 37 | |
50 | Success | Inbound IDoc created | 64 (success), 65 (error) | |
51 | Error | Inbound IDoc data contains errors | 53, 64 (success); 51, 66, 68, 69 (error) | Error triggered by the SAP application, incorrect values in the IDoc data | Ask the functional people, modify the erroneous values in the IDoc (WE02 for example) and run it again using BD87
53 | Success | Inbound IDoc posted | None, 53 | |
56 | Error | IDoc with errors added (you should never see this error code) | 50, 51, 56, 62, 68 | |
60 | Error | Error during syntax check of inbound IDoc | 56, 61, 62 | |
61 | Error | Processing inbound IDoc despite syntax error | 64 | |
62 | Success | Inbound IDoc passed to application | 53 (success), 51 (error) | |
63 | Error | Error passing IDoc to application | | |
64 | Success | Inbound IDoc ready to be passed to application | 62 (success); 51, 60, 63, 68, 69 (error) | | Execute transaction BD20 (program RBDAPP01)
65 | Error | Error in ALE service: incorrect partner profiles | 64, 65 | |
66 | Waiting | Waiting for predecessor IDoc (serialization) | 51 | |
68 | Success | No further processing | 68, None | The IDoc was created using the inbound test tool (WE19) and written to a file for a file inbound test; another IDoc is created if immediate processing is chosen |
69 | Success | IDoc was edited | 64 (success); 51, 68, 69 (error) | There was a manual update of the IDoc in the SAP tables; the original was saved to a new IDoc with status 70 |
70 | Success | Original of an IDoc which was edited; it is not possible to post this IDoc | None | Backup of another manually updated IDoc, see status 69 |
71 | Success | Inbound IDoc reloaded from archive; cannot be processed | | |
74 | Success | Inbound IDoc manually created by the WE19 test tool | 50, 56 | |


 

Thanks,

Mayank Mehta

Adding a second to date


Hi All,

 

I found this interesting to share with you all. There was a requirement where, because of the timestamp in a date column value, the BODS job was failing, and we had to update the date value by adding one second to it. After that, the job completed successfully. Here is the query for your reference:

 

select TO_CHAR(sysdate, 'DD-MON-YYYY HH:MI:SS AM') NOW, TO_CHAR(sysdate+1/(24*60*60),'DD-MON-YYYY HH:MI:SS AM') NOW_PLUS_1_SEC from dual;
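In Oracle, adding a number to a date adds that many days, so 1/(24*60*60) is exactly one second. A small sketch of how the same fraction-of-a-day arithmetic generalizes (illustrative only):

-- sysdate + n/24           adds n hours
-- sysdate + n/(24*60)      adds n minutes
-- sysdate + n/(24*60*60)   adds n seconds
SELECT TO_CHAR(sysdate,                 'DD-MON-YYYY HH24:MI:SS') AS now,
       TO_CHAR(sysdate + 30/(24*60*60), 'DD-MON-YYYY HH24:MI:SS') AS now_plus_30_sec,
       TO_CHAR(sysdate + 5/(24*60),     'DD-MON-YYYY HH24:MI:SS') AS now_plus_5_min
  FROM dual;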

 

Hope this is helpful.

 

Thanks,

Abdulrasheed.

Issue with running the samples


Hi all,

I am facing an issue while running the .NET sample projects with Visual Studio 2010 (Windows 7 64-bit operating system):

 

"Could not load file or assembly 'dotnet_emdq, Version=1.0.4496.42818, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format."

 

 

 

Thanks,

sravan g

Using BODS Metadata tables to find where SYSDATE is hardcoded


Hi All,

 

Here is a query to find where SYSDATE is hard-coded across all BODS ETL jobs:

 

SELECT DISTINCT parent_obj
  FROM di_repo.al_parent_child,
       di_repo.al_langtext
 WHERE parent_obj_key = parent_objid
   AND parent_obj_type <> 'DataFlow'
   AND parent_obj NOT LIKE 'Copy_%'
   AND regexp_count(upper(text_value), '(SYSDATE)') > 0;

 

 

This helps in identifying the jobs that have SYSDATE hard-coded, so the code can be changed to make the value table-driven.

Create a function that reads this table, pass parameters to it to get the value, and assign that value to a global variable in the BODS script (see the sketch below).

As you may know, a GV can be reused in many places.
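A minimal sketch of what such a parameter table and lookup might look like; the table and column names here (ETL_PARAM, PARAM_NAME, PARAM_VALUE) are illustrative assumptions, not part of the original post:

-- Parameter table that replaces the hard-coded SYSDATE
CREATE TABLE etl_param
(
  param_name   VARCHAR2(30),
  param_value  VARCHAR2(100)
);

INSERT INTO etl_param VALUES ('LOAD_DATE', TO_CHAR(SYSDATE, 'YYYY-MM-DD'));
COMMIT;

-- The lookup a custom function (or a sql() call in a BODS script) would run,
-- with the result assigned to a global variable
SELECT param_value
  FROM etl_param
 WHERE param_name = 'LOAD_DATE';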

 

Hope this will be helpful.

 

Thanks,

Abdulrasheed.

Data Migration to Cloud Solutions from SAP with SAP Rapid-Deployment Solutions - ASUG Webcast


This past week SAP provided this data migration webcast to ASUG.

 

Abstract provided by SAP:

 

In this session we learned about the Rapid Data Migration package, which combines SAP's Rapid Deployment Solutions (RDS) with SAP Data Services and SAP Information Steward software. Ready and pre-built for SAP's cloud solutions Customer OnDemand and SuccessFactors Employee Central, it provides a standardized and successful approach to planning, designing, and implementing data migration projects.

1fig.jpg

Figure 1: Source: SAP

 

Figure 1 shows SAP cloud solutions. SuccessFactors and Ariba came through acquisitions.

2fig.jpg

 

Figure 2: Source: SAP

 

Data migration is a challenge for all companies, but there are differences with cloud.

 

Data migration projects can be budget busters and run over time and budget.

 

Move to cloud for low predictable cost of ownership.

3fig.jpg

Figure 3: Source: SAP

 

If you move to the cloud, you need to perform data migration for SuccessFactors/Employee Central.

 

SAP’s Data Migration solution includes Cloud for Customer and migration for SuccessFactors Employee Central.

 

It eases the process of moving into cloud applications (migration from on-premise to cloud). It includes pre-built content, and it corrects and validates data to provide usable data.

 

RDS

4fig.jpg

Figure 4: Source: SAP

 

SAP’s desire to make implementation easier for customers.

 

It includes software from SAP, special content to accelerate project management as well as the specific pace and service offerings (fixed price, fixed scope) – SAP point solutions.

 

Content, in the case of data migration, includes pre-built validation for back end SAP and migration content for a wide range of business objects in back end and toolset for data enrichment.

 

 

Whiteboard

6fig.jpg

Figure 5: Source: SAP

 

Figure 5 shows a whiteboard to get to the SAP Cloud – SuccessFactors

 

You use Information Steward to standardize name

 

You use reporting to see how data migration is going and to report data quality

 

See more information at service.sap.com/rds-dm2cloud (SMP logon required)

 

Load using SAP methodology, and use Information Steward, Data Services, reporting via BI tools

 

6afig.jpg

Figure 6: Source: SAP

 

SuccessFactors changes "USA" to "United States"

 

SF expects “California” rather than CA

 

Data Services is used for extract, transform and load – for data migration go from source to target

 

7fig.jpg

Figure 7: Source: SAP

 

Figure 7 shows the scope of moving from HCM to SF Employee Central

 

8fig.jpg

Figure 8: Source: SAP

 

Figure 8 shows the scope of Cloud for Customer migration objects.

 

9fig.jpg

Figure 9: Source: SAP

 

Figure 9 shows the solution is fixed price, fixed scope work by SAP or qualified partners.

 

Data migration starter service includes knowledge transfer, medium complexity and enables the in-house team

 

The level of skill with Data Services varies from customer to customer.

 

Data migration to the cloud offers more options.

If you own Data Services, you can download the RDS solution and then engage SAP or partners.

Other option: if you do not own Data Services but want data migration, the SAP cloud organization can provide migration in a hosted Data Services environment; the software is not on-premise.

 

 

 

 

Related Links:

Meet ASUG TechEd Speaker Emil - Kellogg's Data How to Deliver One Version of the Truth

Data Migration Education Class: TZIM4M

https://training.sap.com/us/en/course/tzim4m-data-migration-with-sap-data-services-classroom-096-us-en/

 

Data Migration on service marketplace

http://service.sap.com/rds  – Data migration packages are under Enterprise Information Management

 

Recorded Demo of SAP rapid data migration to cloud solutions:

http://service.sap.com/rds-dm2cloud

 

Blogs on data migration

http://scn.sap.com/people/frank.densborn/blog/2012/05/25/a-better-way-to-migrate-your-sap-data-with-rapid-deploymentsolutions
