Channel: Data Services and Data Quality

Advantage of Join Ranks in BODS


What is Join Rank?

 

You can use join rank to control the order in which sources (tables or files) are joined in a dataflow. The highest ranked source is accessed first to construct the join.

 

Best Practices for Join Ranks:

  • Define the join rank in the Query editor.
  • For an inner join between two tables, in the Query editor assign a higher join rank value to the larger table and, if possible, cache the smaller table.


Performance Improvement:


Controlling join order can often have a huge effect on the performance of producing the join result. Join ordering is relevant only in cases where the Data Services engine performs the join. In cases where the code is pushed down to the database, the database server determines how a join is performed.

 

Where should Join Rank be used?


When the code is not fully pushed down and the sources contain a huge number of records, join rank may be considered. Join rank plays an important role in performance optimization in such cases because the DS engine performs the join. The Data Services Optimizer considers join rank and uses the source with the highest join rank as the left source.

 

You can print a trace message to the Monitor log file which allows you to see the order in which the Data Services Optimizer performs the joins. This information may help you to identify ways to improve the performance. To add the trace, select Optimized Data Flow in the Trace tab of the "Execution Properties" dialog.

 

This article will be continued with a real-time example of Join Rank soon.


How to capture error log in a table in BODS


I will walk you through, step by step, how to capture error messages if any dataflow fails in a Job. I have taken a simple example with a few columns to demonstrate.


Step 1: Create a Job and name it as ‘ERROR_LOG_JOB’


Step 2: Declare the following four global variables at the Job level. Refer to the screenshot below for the names and data types.

GV.png

Step 3: Drag a Try block, a Dataflow and a Catch block into the work area and connect them as shown in the diagram below. Inside the dataflow you can drag any existing table in your repository as a source and populate a few columns into a target table. Make sure the target table is a permanent table. This is just for the demo.

Job.png


Step 4: Open the Catch block, drag one script inside it and name it as shown in the diagram below.

Catch Inside.png


Step 5: Open the script and write the code below inside it, as shown in the diagram.

Script Msg.png


The above script populates the values of the global variables using some built-in BODS functions and calls a custom function to log the errors into a permanent table. This function does not exist yet; we will create it in a later step.
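For reference, here is a minimal sketch of such a catch-block script. The global variable names ($GV_ERROR_NUMBER, $GV_ERROR_CONTEXT, $GV_ERROR_MESSAGE, $GV_ERROR_TIMESTAMP) and the custom function name (FN_ERROR_LOG) are assumptions based on the screenshots, so adjust them to your own names.

# Populate the global variables using the built-in error functions available inside a catch block
$GV_ERROR_NUMBER = error_number();
$GV_ERROR_CONTEXT = error_context();
$GV_ERROR_MESSAGE = error_message();
$GV_ERROR_TIMESTAMP = cast(error_timestamp(), 'varchar(512)');

# Call the custom function (created in Steps 6 and 7) to write the error details into the ERROR_LOG table
FN_ERROR_LOG($GV_ERROR_NUMBER, $GV_ERROR_CONTEXT, $GV_ERROR_MESSAGE, $GV_ERROR_TIMESTAMP);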


Step 6: Go to the Custom Functions section in your repository, create a new custom function and name it as shown below.


Fun sec.png


Step 7: Click Next in the above dialog box and write the code below inside the function. You need to declare parameters and local variables as shown in the editor below. Keep the data types of these parameters and local variables the same as those of the global variables in step 2. Validate the function and save it.


Fun.png
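As a rough guide, the body of this custom function might look like the sketch below, which writes the error details into the ERROR_LOG table (created in Step 8) via the ETL_CTRL datastore mentioned later, using the sql() function. The parameter names ($P_ERROR_NUMBER, $P_ERROR_CONTEXT, $P_ERROR_MESSAGE, $P_ERROR_TIMESTAMP) are assumptions; declare them with the same data types as the global variables from Step 2.

# Insert one row into the error log table.
# [$var] substitutes the parameter value as-is, {$var} substitutes it enclosed in single quotes.
sql('ETL_CTRL', 'INSERT INTO dbo.ERROR_LOG (ERROR_NUMBER, ERROR_CONTEXT, ERROR_MESSAGE, ERROR_TIMESTAMP) VALUES ([$P_ERROR_NUMBER], {$P_ERROR_CONTEXT}, {$P_ERROR_MESSAGE}, {$P_ERROR_TIMESTAMP})');

Return 0;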


Step 8: Now your function is ready to use. Assuming you have SQL Server as the database where you want to capture these errors in a table, create the table to store the information.

 

CREATE TABLE [dbo].[ERROR_LOG](
      [SEQ_NO] [int] IDENTITY(1,1) NOT NULL,
      [ERROR_NUMBER] [int] NULL,
      [ERROR_CONTEXT] [varchar](512) NULL,
      [ERROR_MESSAGE] [varchar](512) NULL,
      [ERROR_TIMESTAMP] [varchar](512) NULL
)

 

You may change the datastore as per your requirement. I have used ETL_CTRL as the datastore in the above function; it is connected to the SQL Server database where the above table was created.

 

Step 9: Just to make sure that the dataflow fails, we will force it to throw an error at run time. Inside your dataflow, use a permanent target table. Now double-click the target table and add one line of text below the existing comment under the Load Triggers tab. Refer to the screenshot below. This is one way to throw an error in a dataflow at run time.


Tbl Error.png


Step 10: Now your Job is ready to execute. Save and execute your Job. You should get an error message in the monitor log. Open the table in your database and check whether the error log information is populated. The error log should look as shown below.

Monitor Log.png


The ERROR_LOG table shall capture the same error message, as shown below.


Error Result.png


Hope this helps. In case you face any issue, do let me know.

Custom function in BODS to remove special characters from a string


Below is a step-by-step procedure to write a custom function in BODS that removes special characters from a string using ASCII values.

 

Step 1: Create a custom function in BODS and name it as 'CF_REMOVE_SPECIAL_CHARS'

 

Step 2: Use the below code in your function.

 

fun.png

 

Step 3: Declare the parameters and local variables as shown in the left pane of the above function editor. A sketch of the function body follows the declarations below.

 

$input_field - parameter type is input (data type varchar(255) )

$L_Char - datatype varchar(255)
$L_Counter - datatype int
$L_String - datatype varchar(255)
$L_String_final - datatype varchar(255)
$L_String_Length - datatype int
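For reference, a minimal sketch of the function body used in Step 2 is shown below. The exact code in the screenshot may differ slightly; in particular, which ASCII ranges are kept (here digits, letters and spaces) is an assumption about what counts as a special character.

$L_String = $input_field;
$L_String_Length = length($L_String);
$L_Counter = 1;
$L_String_final = '';

while ($L_Counter <= $L_String_Length)
begin
   $L_Char = substr($L_String, $L_Counter, 1);
   # Keep digits (ASCII 48-57), uppercase letters (65-90), lowercase letters (97-122) and spaces (32)
   if ((ascii($L_Char) >= 48 and ascii($L_Char) <= 57)
       or (ascii($L_Char) >= 65 and ascii($L_Char) <= 90)
       or (ascii($L_Char) >= 97 and ascii($L_Char) <= 122)
       or (ascii($L_Char) = 32))
   begin
      $L_String_final = $L_String_final || $L_Char;
   end
   $L_Counter = $L_Counter + 1;
end

Return $L_String_final;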


Step 4: Save this function.

 

Step 5: Call this function while mapping any field in Query Editor where you want to remove special characters.

 

Ex: CF_REMOVE_SPECIAL_CHARS(Table1.INPUT_VAL)

 

The above function call shall remove all special characters from the INPUT_VAL field of Table1, and the output value shall look like the data in the table below.

 

TBL.png

Custom function to get database name from a datastore in DS


Getting the database name from a datastore can be useful if you have configured more than one datastore configuration in your project. It helps you avoid code changes while migrating objects from one environment to another during code promotion.

 

Below is a step-by-step procedure to create a custom function that gets the database name from a datastore, and to call it in a batch job.

 

1) Go to your Local Object Library and choose Custom Functions then right click on Custom Function and select New


2) Enter the name of the function as 'CF_DATABASE_NAME'.


3) Enter the below line of code inside the editor.

 

Fn.png


 

Then declare an input parameter named $P_Datastore of type varchar(64). Validate the function and, if no errors are found, save it.
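For reference, the single line of code inside the function body can simply wrap the built-in db_database_name() function, which returns the database name of the configuration currently in use for the datastore passed to it. This is a sketch of one possible implementation, not necessarily the exact line in the screenshot:

Return db_database_name($P_Datastore);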


4) Create a global variable at the Job level in any test batch job you have created and name this global variable $G_Database, of type varchar(64).


5) Call this function in one script of your batch job and use this Global variable wherever required in your Job.


You can call this function in a script as shown below; you simply need to pass the name of your datastore in single quotes.

$G_Database=CF_DATABASE_NAME('datastore_name');


Example of a practical use:


sql('CONV_DS', 'TRUNCATE TABLE ' || $G_Database || '.ADMIN.CONV_CUSTOMER_ADDRESS');


The above code reads the database name at run time from the global variable and truncates the table before the Job loads it. This code can be used unchanged in any environment when you promote your code; if the database name were hard-coded, you would end up updating the code in every new environment.

How to use Pre-Load and Post-Load command in Data Services.


In this article we will discuss how to use the Pre-Load and Post-Load commands in Data Services.

 

Business Requirement: First, execute a command/program that creates space for the transformation; then transform the data; after that, execute another program that publishes the target data.

 

What are Pre-Load and Post-Load?

 

Pre-Load and Post-Load commands are SQL commands that the software executes before starting a load or after finishing a load. When a data flow is called, the software opens all the objects (queries, transforms, sources, and targets) in the data flow. Next, the software executes the Pre-Load SQL commands before processing any transform; the Post-Load SQL commands are executed after the transforms have been processed.

 

How do we use them for our business requirement?

 

We can use both the Pre-Load and Post-Load commands to execute programs before and after the transformation; the steps below explain this in detail.

 

Right-click on the target object in the Dataflow and choose Open.

1.png

The target object options will be shown as below.

1-a.PNG

Both the Pre Load Commands tab and the Post Load Commands tab contain a SQL Commands box and a Value box. The SQL Commands box contains command lines. To edit/write a line, select the line in the SQL Commands box. The text for the SQL command appears in the Value box. Edit the text in that box.

2.PNG

 

To add a new line, determine the desired position for the new line, select the existing line immediately before or after the desired position, right-click, and choose Insert Before to insert a new line before the selected line, or choose Insert After to insert a new line after the selected line. Finally, type the SQL command in the Value box. You can include variables and parameters in pre-load or post-load SQL statements. Put the variables and parameters in either brackets, braces, or quotes.
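For example, a Pre-Load SQL command that clears the current load date from a staging table before the load could look like the line below. The table name STG_SALES and the global variable $G_LoadDate are illustrative assumptions; curly braces substitute the variable value enclosed in single quotes, while square brackets substitute it as-is.

DELETE FROM STG_SALES WHERE LOAD_DATE = {$G_LoadDate}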

1-b.PNG

1-C.PNG

To delete a line, select the line in the SQL Commands box, right click, and choose Delete.

1-D.PNG

Open the Post Load Commands tab and write the post-transformation command in the same way as the Pre-Load command.

1-E.PNG

Save and execute. The job will execute the Pre-Load commands, the transformation and the Post-Load commands in sequence.

 

1-f.PNG

Data processing completed successfully as per the business requirement.

 

Note:

Because the software executes the SQL commands as a unit of transaction, you should not include transaction commands in Pre-Load or Post-Load SQL statements.

My Template Table Became 'Permanent', and I Want to Change it Back


On our project, the innocent actions of a team member resulted in my template table being converted into a permanent one, and since it was used in many data flows, I couldn't just right-click/delete it away. I searched for online help but found none, so I experimented with the following steps, which worked for me:

 

1-      Check out the permanent table. From the ‘Central Object Library’ window, right-click, check out.

2-      In Designer, in the datastore area of your ‘Local Object Library’, right-click, delete the permanent table you just checked out.

3-      Now, remove the same permanent table from your data flow in Designer, and replace it with a new template table that you’ll create with the exact same name and db (TRANS, STAGE, etc) as the permanent table had. The name is very important, it must be exactly the same. Exactly.

4-      Save, check, test your data flow.

5-      Check in the permanent table. From the ‘Central Object Library’ window, right-click, check in. You should see the permanent table disappear from the ‘Tables’ list, with the table instead appearing in the ‘Template Tables’ list.

JMS Real-Time integration with SAP Data Services


Purpose

This how-to guide shows how to integrate a Java Message Service (JMS) provider with SAP Data Services. This is a common Enterprise Application Integration scenario where a service is called asynchronously via request/response messages. SAP Data Services' role here is to provide a simple Real-Time service. Configuration includes quite a few steps to get everything up and running. This step-by-step configuration example covers all components that need to be touched, including the JMS provider.

 

Overview

We want an external information resource (IR) – our JMS provider - to initiate a request by putting a request message into a request queue.  SAP Data Services is the JMS client that waits for request messages, executes a service and puts a correlated response message into a response queue.  We’re using the pre-built JMS adapter in SAP Data Services 4.2 and use Active MQ as JMS provider. Since we focus on Real-Time integration we’re not using an adapter datastore in this scenario. All incoming and outgoing data is received/sent back via messages. We will configure a Real-Time Job, check the settings of the Job Server and Access Server, configure a Real-Time Service, install Active MQ and configure the message queues, configure the JMS adapter and its operation and finally send test messages from the Active MQ console.

Real-Time Job
For our service we're using a "Hello World" Real-Time Job named Job_TestConnectivity. For details, please refer to the SAP Data Services 4.2 tutorial, Chapter 14. SAP Data Services comes with all the ATL, DTD and XML files in <DS_LINK_DIR>/ConnectivityTest needed to create Job_TestConnectivity. The job reads an input message that has one input string…

 

 

…and returns an output message that has one output string with the first two words of the input string in reverse order:

 


Job Server

We need to make sure that one Job Server supports adapters. Using the Data Services Server Manager utility, we switch on "support adapter, message broker communication" and "Use SSL protocol for adapter, message broker communication". We associate the Job Server with the repository that has the Real-Time Job Job_TestConnectivity. Finally we restart SAP Data Services by clicking "close and restart", or we restart it later via Control Panel => Administrative Tools => Services => SAP Data Services (right mouse click) => restart.

Access Server

We need to have an Access Server up and running. The Access Server  will receive the input messages from the JMS adapter and dispatch them to an instance of the Real-Time Service RS_TestConnectivity. In SAP Data Services Management Console choose Administrator => Management => Access Server and check if an Access Server is configured and add one if necessary. By default, the AccessServer uses port 4000.

 

Real-Time Service

We configure a Real-Time Service “RS_TestConnectivity” for our Real-Time Job Job_TestConnectivity. In SAP Data Services Management Console navigate to Administrator => Real-Time => <hostname>:4000 => Real-Time Services => Real-Time Service Configuration. Configure a new Real-Time Service “RS_TestConnectivity” and select Job_TestConnectivity with the Browse-Button:

 

Add the JobServer as Service Provider and click “Apply”. Start the Real-Time Service via Administrator => Real-Time => <hostname>:4000 => Real-Time Services => Real-Time Service Status, and click "Start":

Active MQ - Installation

We could use any JMS provider but in this case we’re using Active MQ since it can be quickly installed and configured. Download and unzip Active MQ from http://activemq.apache.org/. In this scenario we use version 5.9.0 and we install it in C:\local\ActiveMQ on the same machine as SAP Data Services. At the command line change to directory C:\local\ActiveMQ\bin and execute activemq.bat:

Active MQ console

Now, we have our JMS provider up and running and we can access the Active MQ console at http://<hostname>:8161/admin . We’re using admin / admin to login.

 

The browser should now display the homepage of the Active MQ console:

We click on the “Queues” menu to add 3 queues named “FailedQueue”, “RequestQueue” and “ResponseQueue”:

Active MQ JMS client

The SAP Data Services JMS Adapter will access the JMS client provided by Active MQ to communicate with the JMS provider. The JMS client is in activemq-all-5.9.0.jar. We will add this jar file to the ClassPath of the JMS adapter later. According to the JNDI documentation of Active MQ we need to create a jndi.properties file and either add it to the ClassPath or put it into activemq-all-5.9.0.jar. The jndi.properties file maps the JNDI names of the queues to their physical names. Create jndi.properties as shown below. You can add it to activemq-all-5.9.0.jar e.g. by using WinZip.
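A minimal jndi.properties along the lines below should be enough for this scenario. The queue.* entries map the JNDI names used by the adapter to the physical queue names created above; adjust the host and port if your broker runs elsewhere.

java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url = tcp://localhost:61616
queue.RequestQueue = RequestQueue
queue.ResponseQueue = ResponseQueue
queue.FailedQueue = FailedQueue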

JMS Adapter

Now we are ready to configure our JMS Adapter in SAP Data Services. In SAP Data Services Management Console, choose Administrator => Adapter Instances => Adapter Configuration…

Choose JMSAdapter…

Enter the configuration information as shown below:

 

  • set the adapter name; here: MyJMSAdapter
  • set the Access Server hostname and port; here: localhost, 4000

 

Remove the default entry of the ClassPath and add the following files to the ClassPath. All necessary jar files - except the JMS client jar file – are located in <DS_LINK_DIR>\lib\ or <DS_LINK_DIR>\ext\lib\. Replace <DS_LINK_DIR> with the respective directory of your installation.

  • <DS_LINK_DIR>\lib\acta_adapter_sdk.jar
  • <DS_LINK_DIR>\lib\acta_broker_client.jar
  • <DS_LINK_DIR>\lib\acta_jms_adapter.jar
  • <DS_LINK_DIR>\lib\acta_tool.jar
  • <DS_LINK_DIR>\ext\lib\ssljFIPS.jar
  • <DS_LINK_DIR>\ext\lib\cryptojFIPS.jar
  • <DS_LINK_DIR>\ext\lib\bcm.jar
  • <DS_LINK_DIR>\ext\lib\xercesImpl.jar
  • C:\local\ActiveMQ\activemq-all-5.9.0.jar (make sure it contains jndi.properties)

Note: The template file JMSadapter.xml that has the default ClassPath and all other default values and choices, is located in <DS_COMMON_DIR>\adapters\config\templates. You might want to adjust this file to have other defaults when configuring a new JMS adapter. Once an adapter is configured you need to change its configuration file located in <DS_COMMON_DIR>\adapters\config. On Windows <DS_COMMON_DIR> is %ALLUSERSPROFILE%\SAP BusinessObjects\Data Services by default.


JMS Adapter - JNDI configuration

We use the Java Naming and Directory Interface (JNDI) to configure the JMS adapter. So we choose:

 

   Configuration Type:  JNDI

 

Next we set the Active MQ JNDI Name Server URL:

 

     Server URL: tcp://localhost:61616

 

For Active MQ we need to set the JNDI context factory to org.apache.activemq.jndi.ActiveMQInitialContextFactory (see ActiveMQ documentation section JNDI support). By default this string is not offered in the drop down box in the JNDI configuration section, so we need to edit <DS_COMMON_DIR>\adapters\config\templates\JMSAdapter.xml and add the string to the pipe-delimited list in the jndiFactory entry.

Note: If MyJMSAdapter already exists, we need to edit <DS_COMMON_DIR>\adapters\config\MyJMSAdapter.xml instead.

 

     <jndiFactory Choices="org.apache.activemq.jndi.ActiveMQInitialContextFactory|     …   >

 

After navigating to Administrator => Adapter Instances => Adapter Instances => MyJMSAdapter we choose the right string from the drop-down list and set:

 

     Initial Naming Factory: org.apache.activemq.jndi.ActiveMQInitialContextFactory

 

Finally we set the Queue Connection Factory and Topic Connection Factory as described in the Active MQ documentation:

 

     Queue Connection Factory: QueueConnectionFactory

 

     Topic Connection Factory: TopicConnectionFactory

 

Click “Apply” to save all settings.

 

JMS Adapter - start

We are ready to check whether the JMS adapter starts now. We still have to configure an operation for the adapter (see below), but we first want to check that our configuration works. There are many reasons why the adapter might not start at first, e.g. missing or wrong files in the ClassPath, typos in the JNDI configuration, etc. In this case you will not find any entry in the error file or trace file; these files are for messages created by the adapter once it is up and running. To find the reason why the adapter doesn't start, switch on Trace Mode=True in the JMS Adapter Configuration and restart the JMS Adapter. Then check the Job Server's log file in <DS_COMMON_DIR>/log/<jobserver>.log and search for the java call the Job Server executes to launch the JMS Adapter. Copy the whole command, execute it from the command line and try to fix the problem by adjusting the command. If the JMS adapter starts properly, the Adapter Instance Status will look like this:

JMS Adapter - operation configuration

Now we need to configure the JMS operation. We'll configure an operation of type "Get: Request/Reply" since we want our adapter to dequeue requests from the RequestQueue, pass them to the Real-Time Service, wait for the response and enqueue the response into the ResponseQueue. In DS Management Console navigate to Administrator => Adapter Instances => <jobserver>@<hostname>:<port> => Adapter Configuration => MyJMSAdapter => Operations and click "Add". Select Operation Type "Get: Request/Reply and Request/Acknowledge using Queues" and click "Apply".

 

Set the operation details as displayed below. Since our Real-Time Service will respond to requests quickly, we reduce the polling interval to 50 ms.

Click “Apply” to save the changes and restart the adapter. The adapter instance status should look like this:

JMS Adapter - test

Of course we want to see if the adapter works as expected. To do this we put a request message into the request queue and see what happens. We open the Active MQ console again (URL http://<hostname>:8161/admin, Login: admin/admin) and select “Send”. We create the request message as shown below:

After we have clicked “Send” the message is enqueued into RequestQueue, dequeued from there by the JMS adapter that passes the request to the Real-Time Service and receives the response from it. The JMS adapter finally puts the response message into ResponseQueue.

The Active MQ console should look like this after some seconds:

We have one message in the response queue. To display it we click on ResponseQueue and then on the message ID…

…and have a look at the response message. You should see the "World Hello" string in the message details.

Substitution parameters in SAP DS


What is substitution parameter?

 

  • Substitution parameters are used to store constant values and are defined at the repository level.
  • Substitution parameters are accessible to all jobs in a repository.
  • Substitution parameters are useful when you want to export and run a job containing constant values in a specific environment.

 

Scenario to use Substitution Parameters:

 

For instance, suppose you create multiple jobs in a repository that reference a directory on your local computer to read source files. Instead of creating a global variable in each job to store this path, you can use a substitution parameter. After creating a substitution parameter value for the directory in each environment, you can run the job in a different environment and all the objects that reference the original directory will automatically use the new value. This means that you only need to change the constant value (the original directory name) in one place (the substitution parameter), and its value will automatically propagate to all objects in the job when it runs in the new environment.

 

Key difference between substitution parameters and global variables:

 

  • You would use a global variable when you do not know the value prior to execution and it needs to be calculated in the job.
  • You would use a substitution parameter for constants that do not change during execution. Using a substitution parameter means you do not need to define a global variable in each job to parameterize a constant value.

 

Global Variables                          | Substitution Parameters
Defined at Job level                      | Defined at repository level
Cannot be shared across Jobs              | Available to all Jobs in a repository
Data-type specific                        | No data type (all values are strings)
Value can change during job execution     | Fixed value set prior to execution of the Job (constants)

 

How to define the Substitution Parameters?

 

Open the Substitution Parameter Editor from the Designer by selecting

Tools > Substitution Parameter Configurations....

• You can either add another substitution parameter to an existing configuration, or add a new configuration by clicking the Create New Substitution Parameter Configuration icon in the toolbar.

• The name prefix is two dollar signs $$ (global variables are prefixed with one dollar sign). When adding new substitution parameters in the Substitution Parameter Editor, the editor automatically adds the prefix.

• The maximum length of a name is 64 characters.

 

In the following example, the substitution parameter $$SourceFilesPath has the value D:/Data/Staging in the configuration named Dev_Subst_Param_Conf and the value C:/data/staging in the Quality_Subst_Param_Conf configuration.

 

subst.png

 

This substitution parameter can be used in more than one job in a repository. You can use substitution parameters in all places where global variables are supported, such as Query transform WHERE clauses, scripts, mappings, the SQL transform, flat-file options, Address Cleanse transform options, etc. The script below will print the source files path defined above.

 

Print ('Source Files Path: [$$SourceFilesPath]');

 

Associating a substitution parameter configuration with a system configuration:

 

A system configuration groups together a set of datastore configurations and a substitution parameter configuration. For example, you might create one system configuration for your DEV environment and a different system configuration for your Quality environment. Depending on your environment, both system configurations might point to the same substitution parameter configuration, or each system configuration might require a different substitution parameter configuration. In the example below, we are using different substitution parameter configurations for the DEV and Quality systems.

 

To associate a substitution parameter configuration with a new or existing system configuration:


In the Designer, open the System Configuration Editor by selecting

Tools > System Configurations

You may refer to this blog to create the system configuration.

 

The following example shows two system configurations, DEV and Quality. In this case, there are substitution parameter configurations for each environment. Each substitution parameter configuration defines where the data source files are located. Select the appropriate substitution parameter configuration and datastore configurations for each system configuration.

system.png

At job execution time, you can set the system configuration and the job will execute with the values for the associated substitution parameter configuration.

 

Exporting and importing substitution parameters:

 

Substitution parameters are stored in a local repository along with their configured values. Data Services does not include substitution parameters as part of a regular export. Therefore, you need to move substitution parameters and their configurations to other repositories by exporting them to a file and then importing the file into the other repository.

 

Exporting substitution parameters

  1. Right-click in the local object library and select Repository > Export Substitution Parameter Configurations.
  2. Select the check box in the Export column for the substitution parameter configurations to export.
  3. Save the file.

The software saves it as a text file with an .atl extension.

Export.png

Importing substitution parameters

The substitution parameters must have first been exported to an ATL file.

 

  1. In the Designer, right-click in the object library and select Repository > Import from file.
  2. Browse to the file to import.
  3. Click OK.

Better Python Development for BODS: How and Why


Not enough love: The Python User-Defined Transform


ss.png

 

In my opinion, the python user-defined transform (UDT) included in Data Services (Data Quality -> UserDefined) bridges several gaps in the functionality of Data Services.  This little transform allows you to access records individually and perform any manipulation of those records.  This post has two aims: (1) to encourage readers to consider the Python transform the next time things get tricky and (2) to give experienced developers an explanation on how to speed up their Python development in BODS.

 

Currently, if you want to apply some manipulation or transformation record by record you have two options:

  1. Write a custom function in the BODS Scripting language and apply this function as a mapping in a query.
  2. Insert a UDT and write some python code to manipulate each record.

 

How to choose?  Well, I would be all for keeping things within Data Services, but the built-in scripting language is a bit dry of functionality and doesn't give you direct access to records simply because it is not in a data flow.  In favour of going the python route are the ease and readability of the language, the richness of standard functionality and the ability to import any module that you could need.  Furthermore with Python data can be loaded into memory in lists, tuples or hash-table like dictionaries.  This enables cross-record comparisons, aggregations, remapping, transposes and any manipulation that you can imagine!  I hope to explain how useful this transform is in BODS and how nicely it beefs up the functionality.

 

For reference, the UDT is documented in chapter 11 of http://help.sap.com/businessobject/product_guides/sbods42/en/ds_42_reference_en.pdf

The best way to learn python is perhaps just to dive in, keeping a decent tutorial and reference close at hand.  I won't recommend a specific tutorial; rather google and find one that is on the correct level for your programming ability!

 

Making Python development easier

When developing I like to be able to code, run, check (repeat).  Writing Python code in the Python Smart Editor of the UDT is cumbersome and ugly if you are used to a richer editor.  Though it is a good place to start with learning to use the Python in BODS because of the "I/O Fields" and "Python API" tabs, clicking through to the editor every time you want to test will likely drive you mad.  So how about developing and testing your validation function or data structure transform on your local machine, using your favourite editor or IDE (personally I choose Vim for Python)?  The following two tips show how to achieve this.

 

Tip#1: Importing Python modules

Standard Python modules installed on the server can be imported as per usual using import.  This allows the developer to leverage datetime, string manipulation, file IO and various other useful built-in modules.  Developers can also write their own modules, with functions and classes as needed.  Custom modules must be set up on the server, which isn't normally accessible to Data Services Designers.

 

The alternative is to dynamically import custom modules given their path on the server using the imp module.  Say you wrote a custom module to process some records called mymodule.py containing a function myfunction.  After placing this module on the file server at an accessible location you can access its classes and functions in the following way

 

import imp
mymodule = imp.load_source('mymodule', '/path/to/mymodule.py')
mymodule.myfunction()

 

This enables encapsulation and code reuse.  You can either edit the file directly on the server, or re-upload it with updates, using your preferred editor.  What I find particularly useful is that as a data analyst/scientist/consultant/guy (who knows these days) I can build up an arsenal of useful classes and functions in a python module that I can reuse where needed.

 

Tip#2: Developing and testing from the comfort of your own environment

To do this you just need to write a module that will mimic the functionality of the BODS classes.  I have written a module "fakeBODS.py" that uses a csv file to mimic the data that comes into a data transform (see attached).  Csv input was useful because the transforms I was building were working mostly with flat files.  The code may need to be adapted slightly as needed.

 

Declaring instances of these classes outside of BODS allows you to compile and run your BODS Python code on your local machine.  Below is an example of a wrapping function that I have used to run "RunValidations", a function that uses the DataManager and Collection, outside of BODS.  It uses the same flat file input and achieves the same result!  This has sped up my development time, and has allowed me to thoroughly test implementations of new requirements on a fast changing project.

 

def test_wrapper():
    import fakeBODS
    Collection = fakeBODS.FLDataCollection('csv_dump/tmeta.csv')
    DataManager = fakeBODS.FLDataManager()
    RunValidations(DataManager, Collection, 'validationFunctions.py', 'Lookups/')

Limitations of UDT

There are some disappointing limitations that I have come across that you should be aware of before setting off:

  • The size of an output column (as of BODS 4.1) is limited to 255 characters.  Workaround can be done using flat files.
  • You can only access data passed as input fields to the transform.  Variables for example have to be mapped to an input column before the UDT if you want to use them in your code.
  • There is no built-in functionality to do lookups in tables or execute sql through datastore connections from the transform.

 

How a powerful coding language complements a rich ETL tool

Python code is so quick and powerful that I am starting to draw all my solutions out of Data Services into custom python modules.  It is faster, clearer for me to understand, and more adaptable.  However, this is something to be careful of.  SAP BODS is a great ETL tool, and is a brilliant cockpit from which to direct your data flows because of its high-level features such as authorizations, database connections and graphical job and workflow building.  The combination of the two, in my opinion, makes for an ideal ETL tool.

 

This is possibly best demonstrated by example.  On a recent project (my first really) with the help of Python transforms and modules that I wrote I was able to solve the following:

  • Dynamic table creation and loading
  • Executable metadata (functions contained in Excel spreadsheets)
  • Complicated data quality analysis and reporting (made easy)
  • Reliable unicode character and formatting export from excel

Data Services 4.1 on the other hand was indispensable in solving the following requirements

  • Multi-user support with protected data (aliases for schemas)
  • Maintainable centralized processes in a central object library with limited access for certain users
  • A framework for users to build their own Jobs using centralized processes.

The two complemented each other brilliantly to reach a solid solution.

 

Going forward

With the rise of large amounts of unstructured data and the non-trivial data manipulations that come with it, I believe that every Data analyst/scientist should have a go-to language in their back pocket.  As a trained physicist with a background in C/C++ (ROOT) I found Python incredibly easy to master and put it forward as one to consider first.

 

I do not know what the plan is for this transform going forward into the Data Services Eclipse workbench, but hopefully the merits of allowing a rich language to interact with your data inside of BODS are obvious enough to keep it around.  I plan to research this a bit more and follow up this post with another article.

 

about me...

This is my first post on SCN.  I am new to SAP and have a fresh perspective of the products and look forward to contributing on this topic if there is interest.  When I get the chance I plan to blog about the use of Vim for a data analyst and the manipulation of data structures using Python.

SCD Type 1 Full Load With Error Handle - For Beginners


This example may help us understand the usage of SCD Type 1 and how to handle error messages.


Brief about Slowly Changing Dimensions: Slowly Changing Dimensions are dimensions that have data that changes over time.

There are three methods available for handling Slowly Changing Dimensions; here we concentrate only on SCD Type 1.


Type 1-  No history preservation - Natural consequence of normalization.

 

For a SCD Type 1 change, you find and update the appropriate attributes on a specific dimensional record. For example, to update a record in the SALES_PERSON_DIMENSION table to show a change to an individual's SALES_PERSON_NAME field, you simply update one record in the SALES_PERSON_DIMENSION table. This action would update or correct that record for all fact records across time. In a dimensional model, facts have no meaning until you link them with their dimensions. If you change a dimensional attribute without appropriately accounting for the time dimension, the change becomes global across all fact records.

 

This is the data before the change:

 

SALES_PERSON_KEY | SALES_PERSON_ID | NAME         | SALES_TEAM
15               | 00120           | Doe, John B  | Atlanta

 

This is the same table after the salesperson’s name has been changed:


SALES_PERSON_KEY | SALES_PERSON_ID | NAME           | SALES_TEAM
15               | 00120           | Smith, John B  | Atlanta


However, suppose a salesperson transfers to a new sales team. Updating the salesperson's dimensional record would update all previous facts so that the salesperson would appear to have always belonged to the new sales team. This may cause issues in terms of reporting sales numbers for both teams. If you want to preserve an accurate history of who was on which sales team, Type 1 is not appropriate.


Below is the step-by-step batch job creation using SCD Type 1 with error handling.


Create new job

 

 

Add Try and "Script" controls from the palette and drag them to the work area

 

Create a Global variable for SYSDATE

 

6.png

 

Add below script in the script section.

 

# SET TODAYS DATE

$SYSDATE = cast( sysdate( ), 'date');

print( 'Today\'s date:' || cast( $SYSDATE, 'varchar(10)' ) ); 

 

 

Add DataFlow.

 

Now double click on DF and add Source Table.

 

Add Query Transformation

 

Add LOAD_DATE new column in Query_Extract

Map the created global variable $SYSDATE. If we used sysdate() directly, the function would be called for every row, which may hit performance.

 

14.png

 

Add another query transform for lookup table

 

  Create new Function Call for Lookup table.

 

 

20.png

 

21.png

 

Required column added successfully via Lookup Table.

 

Add another Query transform. This query will decide whether a source record will be inserted or updated.

 

Now remove the primary key from the target fields.

 

 

 

Create a new column to set the FLAG to Update or Insert.

 

Now write an ifthenelse() function: if LKP_PROD_KEY is null, set FLAG to 'INS', otherwise to 'UPD'.

 

ifthenelse(Query_LOOKUP_PRODUCT_TIM.LKP_PROD_KEY is null, 'INS', 'UPD')

 

27.png

 

Now Create case Transform.

 

30.png

 

 

Create two rules on the FLAG field to route the "INS" and "UPD" records, for example:
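For illustration, the two Case expressions can be as simple as the following; the rule labels and the query name Query_Check_Flag are assumptions, so use the query in which you created the FLAG column:

R_INSERT:  Query_Check_Flag.FLAG = 'INS'
R_UPDATE:  Query_Check_Flag.FLAG = 'UPD'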

Create Insert and Update Query to align the fields

Change LKP_PROD_KEY to PROD_KEY and PROD_ID to SOURCE_PROD_ID for better understanding in the target table.

Now create  Key Generation transform to generate Surrogate key

Select Target Dimension table with Surrogate key (PROD_KEY)

Set Target instance

 

 

43.png

Add a Key_Generation transform for the Query_Insert branch to generate surrogate key values for the new records.

 

For Query_Update we need the surrogate key and the other attributes. Use a Map_Operation transform to update the records.

 

By default, the Map_Operation transform maps Normal input rows to Normal output. Change the Normal output to Update, since we want these records to update existing rows in the target.

 

53.png

 

Update Surrogate key, Product key and other attributes.

 

Go back to insert target table -->  Options --> Update Error Handling as below:

55.png

 

Go back to Job screen and create catch block

 

57.png

 

Select the required exceptions you want to catch, and create a script to display the error messages.

 

60.png

 

Compose your message to print errors in the script_ErrorLogs as below.

 

print( 'Error Handling');

print( error_message() || ' at ' || cast( error_timestamp(), 'varchar(24)'));

raise_exception( 'Job Failed');

 

Now validate the script before proceeding further.

 

Now these messages will report the errors along with the job completion status.

Now create a script to print an error message if there are any database rejections:

 

65.png

 

# print( 'DB Error Handling');

if (get_file_attribute('[$$LOG_DIR]/VENKYBODS_TRG_dbo_Product_dim.txt', 'SIZE') > 0)
begin
   raise_exception( 'Job Failed Check Rejection File');
end

 

Note: VENKYBODS_TRG_dbo_Product_dim.txt is the file name we specified in the target table's error handling section.

 

Before execution, the Last_Updated_Date data in the source and target tables looks as follows.

 

73.png

 

Now execute the job, and we can see the updated Last_Updated_Date values.

72.png

 

Now try to generate an error to verify that the error log captures it through our error handling.

 

Try to implement the same and let me know if you need any further explanation on this.

 

Thanks

Venky

Functions - Data Services


This document briefly describes all the available functions of Data Services.

Put Together A Data Archiving Strategy And Execute It Before Embarking On Sap Upgrade


A significant amount is invested by organizations in an SAP upgrade project. However, few realize that data archiving before embarking on an SAP upgrade yields significant benefits, not only from a cost standpoint but also due to the reduction in complexity during the upgrade. This article not only describes why this is a best practice but also details the benefits that accrue to organizations as a result of data archiving before an SAP upgrade. Avaali is a specialist in the area of Enterprise Information Management. Our consultants come with significant global experience implementing projects for the world's largest corporations.

 

Archiving before Upgrade

It is recommended to undertake archiving before upgrading your SAP system in order to reduce the volume of transaction data that is migrated to the new system. This results in shorter upgrade projects and therefore less upgrade effort and costs. More importantly production downtime and the risks associated with the upgrade will be significantly reduced. Storage cost is another important consideration: database size typically increases by 5% to 10% with each new SAP software release – and by as much as 30% if a Unicode conversion is required. Archiving reduces the overall database size, so typically no additional storage costs are incurred when upgrading.

 

It is also important to ensure that data in the SAP system is cleaned before you embark on an upgrade. Most organizations tend to accumulate messy and unwanted data such as old material codes, technical data and subsequent posting data. Cleaning your data beforehand smoothens the upgrade process, ensures you only have what you need in the new version and helps reduce project duration. Consider archiving, or even purging if needed, to achieve this. Make full use of the upgrade and enjoy a new, more powerful and leaner system with enhanced functionality that can take your business to the next level.

 

Archiving also yields Long-term Cost Savings

By implementing SAP Data Archiving before your upgrade project you will also put in place a long term Archiving Strategy and Policy that will help you generate on-going cost savings for your organization. In addition to moving data from the production SAP database to less costly storage devices, archived data is also compressed by a factor of five relative to the space it would take up in the production database. Compression dramatically reduces space consumption on the archive storage media and based on average customer experience, can reduce hardware requirements by as much as 80% or 90%. In addition, backup time, administration time and associated costs are cut in half. Storing data on less costly long-term storage media reduces total cost of ownership while providing users with full, transparent access to archived information.

SAP DataServices 4.2 Transports Re-Deployment Issues


Data Load from SAP ECC to SAP HANA

 

This is a walkthrough of connecting an SAP ECC system as the source and SAP HANA as the target, establishing the connections using Data Services datastores, and identifying the issues that occur during the process.

 

Creating a Data store to connect to ECC system:

 

  • Right click in the data store tab
  • Data store name: Provide meaningful name
  • Data Store Type:  Select SAP Application from the drop down
  • Database Server Name:  Enter provided server name
  • User Name:
  • Password:

 

ECC_conn.jpg

 

 

 

In the Advance section,

 

  • Data transfer method: Select Shared directory( in my case)
  • Note: Select RFC, if RFC connection is established.
  • Working Directory on SAP Server:  Provide the working directory path.
  • Application path to shared directory: Path on your local Directory.

 

 

 

Creating a Data store to connect to HANA system: 

  • Data store name: Provide meaningful name
  • Data Store Type:  Select ‘Database’ from the drop down
  • Database Type:  SAP HANA
  • Database Version: HANA1.X
  • Select the check box 'Use data source name (DSN)'
  • Data source Name:
  • User Name:
  • Password:

 

HANA_conn.jpg

 

After successfully creating both datastores, import the respective source and target tables.

 

Create a Job using the datastores: drag the source table, use a Query transform, map the required fields to the output schema, connect to the target table, then validate and execute.

 

Conn_flow.png


After successful execution of the job, the record count can be seen in the monitor log as shown below.

 

Dataload.png

 

 

ISSUES:

  • Make sure to have read and write access to the working directory from the BODS system.

    Working directory: E:\usr\sap\XYZ\ABCDEF\work

  • If there are any issues, follow up with the Basis team.

  • Make sure both BODS and ECC are in the same domain. Users can be added from one system to the other if they are in the same domain.

  • For the current version, BODS 4.2, we had an issue with the transport files; for this we found SAP Note 1916294.

 

note.jpg

 

  • The Basis team implemented the above note.

  • After implementing the above note, we got the below issue when executing the job.

 

RFC.jpg

 

  • For the above issue, the Basis team granted permission to the function module /BODS/RFC_ABAP_INSTALL_AND_RUN.

 

  • Make sure that the account has the following authorizations:

*S_DEVELOP

*S_BTCH_JOB

*S_RFC

*S_TABU_DIS

*S_TCODE

How to set exact value for ROWS PER COMMIT in Target table


As we know, the default value of rows per commit is 1000, and the maximum value is 50000.

BODS recommends setting the rows per commit value between 500 and 2000 for best performance.

 

The value of rows per commit to use depends on the number of columns in the target table.

 

Here is the formula for the same -

 

Rows per commit = max_IO_size (64 KB on most platforms) / row size

 

  

Row size = (# of columns) * 20 bytes (average column size) * 1.3 (30% overhead)

 

 

 

E.g., if the number of columns is 10, then row size = 10 * 20 * 1.3 = 260 bytes, and rows per commit = 65536 / 260, which is approximately 250.

How to improve performance while using auto correct load


Using the auto correct load option on a target table will degrade the performance of BODS jobs, because it prevents a full push-down operation from the source to the target when the source and target are in different datastores.

 

However, the auto correct load option is unavoidable in scenarios where no duplicate rows may exist in the target, and it is very useful for data recovery operations.

 

 

When we deal with large data volumes, how do we improve performance?

 

Using a Data_Transfer transform can improve the performance of a job. Let's see how it works :-)

 

Merits:

  • Data_Transfer transform can push down the operations to database server.
  • It enables a full push-down operation even if the source and target are in different data stores.
  • It can be used after Query transforms with GROUP BY, DISTINCT or ORDER BY clauses, which do not allow push down.

 

The idea here is to improve performance by pushing the operations down to the database level.

 

Add a Data_Transfer transform before the target to enable a full push-down from the source to the target. For a merge operation there should not be any duplicates in the source data. Here the Data_Transfer transform pushes the data down to the database, and records are updated or inserted into the target table as long as no duplicates are encountered in the source.

 

 

 

 

 

 

 

 


DS Standard Recovery Mechanism


Introduction:

This document gives overview of standard recovery mechanism in Data Services.

 

Overview: Data Services provides one of the best inbuilt features to recover a job from a failed state. With recovery enabled, the job will restart from the failed instance.

 

DS provides 2 types of recovery

Recovery: By default, recovery is enabled at the dataflow level, i.e. the job will always restart from the dataflow that raised the exception.

Recovery Unit: If you want to enable recovery for a set of actions, you can achieve this with the recovery unit option. Define all your actions in a workflow and enable the recovery unit option under the workflow properties. Now, in recovery mode, this workflow will run from the beginning instead of running from the failed point.


When recovery is enabled, the software stores results from the following types of steps:

  • Work flows
  • Batch data flows
  • Script statements
  • Custom functions (stateless type only)
  • SQL function
  • exec function
  • get_env function
  • rand function
  • sysdate function
  • systime function


Example:

This job loads data from a flat file into a temporary table. (I am loading the same data twice to raise a primary key exception.)

 

 

Running the job:

 

To recover the job from a failed instance, the job should first be executed with recovery enabled. We can enable this under the execution properties.

Job3.png

Below Trace Log shows that Recovery is enabled for this job.


Job4.png

The job failed at the 3rd DF in the 1st WF. Now I am running the job in recovery mode.

Job5.png

The trace log shows that the job is running in recovery mode, using the recovery information from the previous run and starting from Data Flow 3, where the exception was raised.

Job6.png

DS Provides Default recovery at Dataflow Level

 

Recovery Unit:

With default recovery, the job will always restart at the failed DF in the recovery run, irrespective of the dependent actions.


Example: The workflow WF_RECOVERY_UNIT has two dataflows loading data from flat files. If either of the DFs fails, then both DFs have to run again.


To achieve this kind of requirement, we can define all the activities in a workflow and make it a recovery unit. When we run the job in recovery mode, if any of the activities fails, the workflow starts from the beginning.


To make a workflow as recovery unit, Check recovery Unit option under workflow properties.

Job2.png

Once this option is selected, on the workspace diagram the black "x" and green arrow symbol indicate that the work flow is a recovery unit.

Job1.png

Two Data Flows under WF_RECOVERY_UNIT

 

 

Running the job with recovery enabled, an exception is encountered at DF5.

 

 

Now running in recovery mode, the job uses the recovery information of the previous run. As per the requirement, the job should run all the activities defined under the workflow WF_RECOVERY_UNIT instead of only the failed dataflow.

Job5.png

 

Now the job starts from the beginning of WF_RECOVERY_UNIT, and all the activities defined inside the workflow run from the beginning instead of starting from the failed DF (DF_RECOVERY_5).

 

Exceptions:

When you specify that a work flow or a data flow should execute only once, a job will never re-execute that work flow or data flow after it completes successfully, except if that work flow or data flow is contained within a recovery unit work flow that re-executes and has not completed successfully elsewhere outside the recovery unit.

It is recommended that you not mark a work flow or data flow as Execute only once when the work flow or a parent work flow is a recovery unit.

Jobs Traceability Matrix - Query in BODS


All jobs and their associated component details can be retrieved by executing a query against the metadata tables below.

 

Database:  <Repository DB> <Repository login>

 

Ex: UBIBOR01  d2_14_loc

Tables: ALVW_PARENT_CHILD, AL_PARENT_CHILD, AL_LANG, AL_USAGE, etc.


This query lists all the jobs and their traversal paths down to the source/target tables.

 

 

 

 

select Connection_Path.PATH || Other_Objects.DESCEN_OBJ || '( ' || Other_Objects.DESCEN_OBJ_USAGE || ' ) ' PATH
     , substr(Connection_Path.PATH, 2 , instr(Connection_Path.PATH, ' ->> ', 2)-2) Job_Name

FROM
(
SELECT DISTINCT PARENT_OBJ
      , PARENT_OBJ_TYPE
      , SYS_CONNECT_BY_PATH(PARENT_OBJ,' ->> ')|| ' ->> ' PATH
FROM ALVW_PARENT_CHILD
START WITH PARENT_OBJ_TYPE = 'Job'
CONNECT BY PRIOR DESCEN_OBJ = PARENT_OBJ
) Connection_Path,
(
SELECT  PARENT_OBJ
      , PARENT_OBJ_TYPE
      , DESCEN_OBJ
      , DESCEN_OBJ_USAGE
FROM ALVW_PARENT_CHILD
WHERE PARENT_OBJ_TYPE = 'DataFlow'
and
DESCEN_OBJ_TYPE = 'Table'
)Other_Objects
WHERE
Connection_Path.PARENT_OBJ = Other_Objects.PARENT_OBJ
AND
Connection_Path.PARENT_OBJ_TYPE = Other_Objects.PARENT_OBJ_TYPE

Query to get all the dependent objects and their traverse paths of a job


For a given job, this query returns all the dependent objects and their traversal paths. (The job name should be supplied in the outer WHERE clause in place of <<JOB_NAME>>.)

 

 

SELECT JOB_NAME, OBJECT, OBJECT_TYPE, PATH
FROM
(
  SELECT Other_Objects.DESCEN_OBJ OBJECT
       , Other_Objects.DESCEN_OBJ_USAGE OBJECT_TYPE
       , Connection_Path1.PATH || Other_Objects.DESCEN_OBJ || '( ' || Other_Objects.DESCEN_OBJ_USAGE || ' ) ' PATH
       , substr(Connection_Path1.PATH, instr(Connection_Path1.PATH, ' ->> ', 1)+5, instr(Connection_Path1.PATH, ' ->> ', 2)-(instr(Connection_Path1.PATH, ' ->> ', 1)+5)) JOB_NAME
  FROM
  (
    SELECT DISTINCT PARENT_OBJ
         , PARENT_OBJ_TYPE
         , SYS_CONNECT_BY_PATH(PARENT_OBJ,' ->> ') || ' ->> ' PATH
    FROM ALVW_PARENT_CHILD
    START WITH PARENT_OBJ_TYPE = 'Job'
    CONNECT BY PRIOR DESCEN_OBJ = PARENT_OBJ
  ) Connection_Path1,
  (
    SELECT PARENT_OBJ
         , PARENT_OBJ_TYPE
         , DESCEN_OBJ
         , DESCEN_OBJ_USAGE
    FROM ALVW_PARENT_CHILD
    WHERE PARENT_OBJ_TYPE = 'DataFlow'
    AND DESCEN_OBJ_TYPE = 'Table'
  ) Other_Objects
  WHERE Connection_Path1.PARENT_OBJ = Other_Objects.PARENT_OBJ
  AND Connection_Path1.PARENT_OBJ_TYPE = Other_Objects.PARENT_OBJ_TYPE
  UNION
  SELECT Connection_Path2.PARENT_OBJ OBJECT
       , Connection_Path2.PARENT_OBJ_TYPE OBJECT_TYPE
       , Connection_Path2.PATH PATH
       , substr(Connection_Path2.PATH, instr(Connection_Path2.PATH, ' ->> ', 1)+5, instr(Connection_Path2.PATH, ' ->> ', 2)-(instr(Connection_Path2.PATH, ' ->> ', 1)+5)) JOB_NAME
  FROM
  (
    SELECT DISTINCT PARENT_OBJ
         , PARENT_OBJ_TYPE
         , SYS_CONNECT_BY_PATH(PARENT_OBJ,' ->> ') || ' ->> ' PATH
    FROM ALVW_PARENT_CHILD
    START WITH PARENT_OBJ_TYPE = 'Job'
    CONNECT BY PRIOR DESCEN_OBJ = PARENT_OBJ
  ) Connection_Path2
)
WHERE JOB_NAME LIKE <<JOB_NAME>>

Demo on Real time job


REAL TIME JOB DEMO

 

A real-time job is created in the Designer and then configured in the Administrator of the Management Console as a real-time service associated with an Access Server.

This demo will briefly explain the Management Console settings.

We can execute the real-time job from any third-party tool; let us use SoapUI (a third-party tool) to demonstrate our real-time job.

Below is a screenshot of the batch job used to create a sample table in the database (first dataflow) and create the XML target file (second dataflow). The XML target file (created in the second dataflow) can be used to create the XML message source in the real-time job.

Below is the screenshot transformation logic of dataflow(DF_REAL_Data)

Capture.JPG

Below is the screenshot transformation logic of dataflow(DF_XML_STRUCTURE)

Capture.JPG

Below is the screenshot transformation logic of Query Transform "Query" used in DF_XML_STRUCTURE

Capture.JPG

Below is a screenshot of the transformation logic of the second Query transform used in DF_XML_STRUCTURE

Capture.JPG

In the second query transform below we nest the data: select the complete Query from the Schema In pane and import it under the Query of the Schema Out pane.

Capture.JPG

Creation of the XML schema from the Local Object Library

Capture.JPG

Capture.JPG

Go to the second Query again and make the query name the same as in the XML schema (Query_nt_1).

Note: If we do not change the query name, it gives an error.

In the image below, the query is renamed to the same name as displayed in the XML schema.

Capture.JPG

The images below show the creation of the real-time job.

Capture.JPG

Capture.JPG

Capture.JPG

Capture.JPG

Capture.JPG

To Test and Validate the job

In the demo, the end user passes the EMP_ID (1.000000) using the third-party tool, which triggers the real-time job. The job takes the input as the XML message source, obtains the other details from the database table based on the EMP_ID value, and returns them to the end user in the XML message target.

Below is the output XML file.

Capture.JPG

FINALLY, RUN THE REAL-TIME JOB USING THE SOAPUI TOOL:

1. Run the SoapUI tool.

2. Create the project and browse to the WSDL file.

3. Under the project, go to Real-time services, select the project name and send the request.

4. The request window will open; now enter the search string in it.

5. Finally, the record will be returned.
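For illustration, the request entered in the SoapUI request window is a SOAP envelope roughly like the one below. SoapUI pre-fills the correct structure from the WSDL; the element names inside the body shown here are placeholders, since the actual structure comes from the XML message source schema used in the real-time job.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <!-- element names are placeholders; use the structure generated from the WSDL -->
      <EMP_REQUEST>
         <EMP_ID>1.000000</EMP_ID>
      </EMP_REQUEST>
   </soapenv:Body>
</soapenv:Envelope>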
