INFORMATICA
************************
Informatica is a GUI (Graphical User Interface) based tool and is used to do ETL work. The popular versions of Informatica are: 5, 6, 7.1, 8.6, 9.
Architecture of Informatica :
Different Components of Informatica :
1) PowerCenter Administration Console
2) PowerCenter Repository Manager
3) PowerCenter Designer
4) PowerCenter Workflow Manager
5) PowerCenter Workflow Monitor
Different Roles of Informatica :
In any Informatica project, there are mainly two roles: Developer and Administrator.
→ The Developer creates mappings and workflows by using the Designer and the Workflow Manager respectively.
→ The Administrator is responsible for all admin activities, such as creating users and groups, granting permissions, and running the workflows.
* List of Transformations *
Following is the list of transformations available in the PowerCenter Designer tool:
1) Aggregator Transformation
2) Source Qualifier Transformation
3) Expression Transformation
4) Filter Transformation
5) Joiner Transformation
6) Lookup Transformation
7) Normalizer Transformation
8) Rank Transformation
9) Router Transformation
10) Sequence Generator Transformation
11) Stored Procedure Transformation
12) Sorter Transformation
13) Update Strategy Transformation
14) XML Source Qualifier Transformation
15) Union Transformation
16) Transaction Control Transformation
* POWERCENTER DESIGNER *
→ It is a Windows-based client and is used to create mappings.
→ To do any work in the PowerCenter Designer window, we go through the following steps:
1) Creating ODBC connection to the Source Database
2) Importing Source Metadata.
3) Creating ODBC connection to the Target Database.
4) Importing Target Metadata.
5) Creating Mapping
6) Creating Transformations.
Step 1 : Creating ODBC connection to the Source Database :
Go to Start → click on Control Panel → click on Administrative Tools → click on the Data Sources (ODBC) option. A window will open; in it, click on the System DSN tab as in the figure below:
Click on the Add button; another window will open. In it, select the data source Oracle in OraDb10g_home1(2) as below:
→ Click on the Finish button → a window will open as:
In that, provide Data Source Name as: RRITEC_SOURCE
TNS Service Name as: ORCL
User ID as: SOURCE_USER
Click on the Test Connection button → click on Ok. A Connection Successful message window appears as:
Click on Ok → click on Ok. Now the ODBC connection for the Source Database is created successfully.
Step 2 : Importing Source Metadata :
Go to Start → click on Run → type services.msc in the Run window as below:
Click on Ok → a window will open showing all the services → in it, select Informatica Services 8.6.0 → click on Start; if it is already in started mode, click on the Restart link as below:
→ Go to Start → All Programs → Informatica PowerCenter 8.6.0 → Client → PowerCenter Designer → click on Ok → the Designer window will open.
→ Right-click on our created repository, i.e., rep_05 → click on Connect → a window will open asking for the username and password. Provide the username and password as Administrator, as shown in the window below:
→ Click on Connect.
→ Right-click on our folder, i.e., the RRITEC folder → click on Open.
→ Click on the Source Analyzer icon in the toolbar, as marked by the red circle.
→ Go to the Sources menu → click on the Import from Database option; a window will open → in it, provide the ODBC data source as RRITEC_SOURCE and the username and password as SOURCE_USER, as shown below:
→ Click on Connect → under the Select Tables sub-window, which is marked by the brown-colored box, select the EMP and DEPT tables → click on Ok → go to the Repository menu → click on Save.
Step 3: Creating ODBC connection to the Target Database:
→ Go to the SQL*Plus tool → log in as / as sysdba → click on Login.
→ After the SQL*Plus window opens, type:
> CREATE USER TARGET_USER IDENTIFIED BY TARGET_USER;
> GRANT DBA TO TARGET_USER;
→ Connect to SQL*Plus using the above username and password:
CONN TARGET_USER@ORCL → press Enter
Password: TARGET_USER
> SELECT * FROM TAB;
> CREATE TABLE T_EMP AS SELECT * FROM SCOTT.EMP WHERE 1=2;
(The condition WHERE 1=2 is never true, so this creates T_EMP with the same structure as SCOTT.EMP but without copying any rows.)
→ Go to Start → click on Control Panel → click on Administrative Tools → click on Data Sources (ODBC) → a window will open as below:
Click on the Add button → a window will open; select the Oracle in OraDb10g_home2 option as shown below:
→ Click on Finish; a new window will open → in it,
provide Data Source Name as: RRITEC_TARGET
TNS Service Name as: ORCL
User ID as: TARGET_USER
→ Click on Test Connection → a small window will open as:
provide Service Name: ORCL
User Name: TARGET_USER
Password: TARGET_USER
→ Click on Ok.
Step 4: Importing Target Metadata:
→ Go to the Informatica Designer tool → click on the Target Designer icon as:
→ Go to the Targets menu → click on Import from Database → a new window will open
→ in it, provide the ODBC data source as: RRITEC_TARGET
Username as: TARGET_USER
Password as: TARGET_USER
→ Click on Connect → a new window will open → in it, expand the TARGET_USER schema, which is marked by the brown color as below:
→ Expand TABLES → select the target table and click on Ok → go to the Repository menu and click on Save.
Step 5: Creating Mapping:
→ In the Designer tool → click on the Mapping Designer icon → go to the Mappings menu → click on Create → give the name as m_First_Mapping as shown below:
→ Click on Ok.
→ From the Repository navigator, expand the RRITEC folder → expand the Sources tab → drag and drop the EMP table into the mapping workspace → expand the Targets tab → drag and drop the T_EMP table beside the Source Qualifier table (SQ_EMP) in the workspace → map the corresponding ports from SQ_EMP to the target table T_EMP → go to the Repository menu → click on Save.
* POWERCENTER WORKFLOW MANAGER *
→ It is used to create tasks and workflows.
→ Creating a workflow involves the steps below:
1) Creating Workflow
2) Creating Connection
3) Creating Task
4) Linking Task and Workflow
5) Running Workflow
1) Creating Workflow:
Start → All Programs → Informatica PowerCenter 8.6.0 → Client → click on PowerCenter Workflow Manager → the window will look like:
→ Right-click on our repository, i.e., rep_05 → click on Connect → a new window will open; provide the username and password as Administrator.
→ Click on Connect → right-click on the RRITEC folder → click on Open → click on the Workflow Designer icon as shown below:
→ Go to the Workflows menu → click on Create → name it First_Workflow:
→ Click on Ok → the workflow workspace will look as below:
→ Go to the Repository menu → click on Save.
2) Creating Connection:
→ For SOURCE:
In the Connections menu → click on the Relational option → a new window will open as:
→ Select Type as ALL → click on the New button → a new window will open; select Oracle from the list of available options as shown below:
→ Click on Ok → a new window will open
→ in it, provide the name as SOURCE,
Username as: SOURCE_USER
Password as: SOURCE_USER
Connection String as: ORCL, as shown below:
Click on the Ok button.
→ For TARGET:
Similarly, create one more connection for the target; the process is the same as above, but here we need to provide the details as below:
Provide the name as TARGET,
Username as: TARGET_USER
Password as: TARGET_USER
Connection String as: ORCL, as shown below:
→ Click on Ok.
3) Creating Task:
→ In the Workflow Manager window → go to the Tasks menu → click on Create → a new window will open; in it, select the type as Session and the name as First_Session, as shown below:
→ Click on Create → click on Done → a new window will open as:
From the above window, select our mapping, i.e., m_First_Mapping, from the list of all available mappings in our repository (it is not visible here because the mapping was not saved while preparing this document) → click on Ok → our workflow window will look like:
→ Double-click on First_Session → a new window will open; in it, click on the Mapping tab, and the window will look as:
→ From the navigator of the above window → select SQ_EMP under the Sources tab → the window will change as:
→ Click on the down arrow mark, which is marked by the brown color; another window will open as:
In it we need to select our newly created SOURCE connection, because what we selected in the navigator above is the source → click on Ok.
→ Similarly, select our target table, i.e., T_EMP, click on the down arrow mark, select TARGET, and click on Ok.
4) Linking Task and Workflow:
→ Go to the Tasks menu → click on the Link Task option → click on the workflow Start, which is represented by the green arrow, and drag the cursor to First_Session → the flow will look like below:
→ Go to the Repository menu → click on Save.
5) Running the Workflow:
→ Right-click on the workflow, i.e., the green-colored arrow → click on Start Workflow from Task.
TRANSFORMATIONS
********************************
→ In Informatica, transformations help to transform the source data according to the requirements of the target system, and they ensure the quality of the data being loaded into the target.
Transformations are of two types: Active and Passive.
Active Transformation:
An active transformation can change the number of rows that pass through it from source to target, i.e., it eliminates rows that do not meet the condition in the transformation.
Passive Transformation:
A passive transformation does not change the number of rows that pass through it, i.e., it passes all rows through the transformation.
→ Transformations can be Connected or Unconnected.
Connected Transformation:
A connected transformation is connected to other transformations or directly to the target table in the mapping.
Unconnected Transformation:
An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation, and returns a value to that transformation.
FILTER TRANSFORMATION
*************************************
→ It is an Active and Connected type of transformation.
→ A filter condition returns TRUE or FALSE for each row that the Integration Service evaluates, depending on whether the row meets the specified condition. For each row that returns TRUE, the Integration Service passes the row through the transformation. For each row that returns FALSE, the Integration Service drops the row and writes a message to the session log.
→ For any task we are going to do, it is good practice to first draw a diagram to get a clear idea of the task: a DFD (Data Flow Diagram) or MFD (Mapping Flow Diagram). So here also we are going to draw a sample data flow diagram.
Steps 1, 2 and 3 are the same here also, so they are not repeated; you can do them yourself.
Step 4: Creating and Importing Target Table Metadata:
Keep in mind that if we want to import any table structure into the Designer window to hold the target data, that table structure must first exist in the database. If it is not there, create it first using the procedure below.
Go to SQL*Plus, give the username as TARGET_USER, the password as TARGET_USER, and the host string as ORCL, then click on Ok.
After the SQL*Plus window opens, type the syntax below to create the table definition:
CREATE TABLE F_EMP
( Empno Number(10),
Ename Varchar2(20),
Job Varchar2(50),
Sal Number(10,2),
Comm Number(10,2),
Deptno Number(10)
) ;
Now, open the Informatica Designer window, right-click on the repository (here rep_05) in the Navigator window → click on Connect → a small window will open asking for the username and password:
→ Provide the username and password as Administrator → click on Connect. Our created folder, where we need to store all our work, is then opened; expand that folder and see the list of options available in it as below:
Now we need to import the target table into the Designer, as per the steps below:
→ Click on the Warehouse Designer (Target Designer) icon at the top of the workspace window, which is circled in red in the diagram below:
Now go to the Targets menu → click on the Import from Database option; a new window will open as below:
In that window, select
ODBC data source as: RRITEC_TARGET
Username as: Target_User
Password as: Target_User
→ Click on Connect and observe the Select Tables sub-window, which is circled in red. Expand that user and you will find the list of available tables under it. Select the required table, here the F_EMP table, and click on OK; the table metadata then comes into the Designer window under the Target Designer workspace as shown below:
Step 5: Creating Mapping:
Go to the Designer tool and click on the Mapping Designer icon, which is indicated by the red-colored circle as shown below:
Go to the Mappings menu, click on the Create option; a small window will open as:
Give the mapping any name; here it is m_FILTER.
Click on Ok.
In the Repository under the Navigator window, expand the RRITEC folder → expand the Sources folder → expand the Source_User tab; there we will find our source table. Drag and drop the source table EMP into the workspace window; after the drag is done, the Designer window looks like below:
Whenever we drag a source table into the workspace, another table also comes in by default, named SQ_<source_table_name>, here SQ_EMP, where SQ stands for Source Qualifier.
Now, to create the transformation, go to the Transformation tab → click on Create; a window will open asking for two values. Select the transformation type as Filter and the name of the transformation as t_SAL_GREATERTHAN_3000, as shown below:
Click on Create → click on Done. Now the Designer workspace will look like:
Next, we need to give the filter condition on this newly created Filter transformation.
Select the required fields in the Source Qualifier and drag and drop those fields from the Source Qualifier table into the Filter transformation table as below:
→ Now we need to give the condition for the Filter transformation. For that, click on the header of the Filter transformation table; the Edit Transformations window will open. In it, select the Properties tab, and the window will look like:
→ Click on the down arrow mark of the Filter Condition attribute, which is circled in red in the above diagram; the Expression Editor window will open as below:
→ In the Expression Editor window, select the Ports tab → double-click on SAL → from the keyboard type >= 3000, then click on the Validate button; it shows a validation success message if the syntax and everything is proper, as:
Click on Ok → click on Ok → again Ok.
→ Now, expand the Target_User tab; there we will find our target table. Drag and drop the target table F_EMP into the workspace window; after the drag is done, the Designer window looks like below:
→ Now, map the required fields from the Filter transformation table into the target table as below:
→ Go to the Repository tab → click on Save, and observe the saved message, stating whether the mapping is Valid or Invalid, under the Output window in the same Designer tool.
→ If you want to check whether the records were loaded into the target table after executing the workflow, go to SQL*Plus and query the table as:
> SELECT * FROM F_EMP;
We get only those records that meet the condition we specified in the Filter transformation.
Filtering Rows with NULL values :
To filter rows containing null values or spaces, use the ISNULL and IS_SPACES functions to test the value of the port.
For example, if you want to filter out rows that contain a NULL value in the FIRST_NAME port, use the following condition:
IIF ( ISNULL ( FIRST_NAME ) , FALSE , TRUE )
This condition states that if the FIRST_NAME port is NULL, the return value is FALSE
and the row should be discarded. Otherwise, the row passes through to the next
transformation.
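For reference, the two functions mentioned above can also be combined in one condition. A small sketch, not part of the original walkthrough, assuming a string port named FIRST_NAME:
-- drop rows whose FIRST_NAME is NULL or contains only spaces
IIF ( ISNULL ( FIRST_NAME ) OR IS_SPACES ( FIRST_NAME ) , FALSE , TRUE )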
Limitations of Filter Transformation:
i) Using a Filter transformation, we can load only one target table at a time.
ii) If we have five different conditions, we need to create five different Filter transformations.
iii) We cannot capture the false (unsatisfied) records of the condition in a table.
NOTE:
1) Use the Filter transformation early in the mapping:
To maximize session performance, keep the Filter transformation as close as possible to the sources in the mapping. Rather than passing rows that you plan to discard through the mapping, you can filter out unwanted data early in the flow of data from sources to targets.
2) Use the Source Qualifier transformation to filter:
The Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows within a mapping, the Source Qualifier transformation filters rows as they are read from the source.
The main difference is that the Source Qualifier limits the row set extracted from the source, while the Filter transformation limits the row set sent to the target. Since a Source Qualifier reduces the number of rows used throughout the mapping, it provides better performance.
However, the Source Qualifier transformation only lets you filter rows from relational sources, while the Filter transformation filters rows from any type of source. Also, note that since it runs in the database, you must make sure that the filter condition in the Source Qualifier transformation uses only standard SQL. The Filter transformation can define a condition using any statement or transformation function that returns either a TRUE or FALSE value.
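To make the difference concrete, a source filter in the Source Qualifier is ordinary SQL evaluated in the database at read time. A minimal sketch against the EMP source used throughout this document (column list from SCOTT.EMP):
SELECT EMPNO, ENAME, JOB, SAL, COMM, DEPTNO
FROM   EMP
WHERE  SAL >= 3000;   -- standard SQL only; rows are restricted before they enter the mapping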
EXPRESSION TRANSFORMATION
*************************************
→ It is a Passive and Connected type of transformation.
→ Use the Expression transformation to perform row-level (non-aggregate) calculations; aggregate calculations belong to the Aggregator transformation.
→ It is best practice to place one Expression transformation after the Source Qualifier and one Expression transformation before the target table, because if, in future, we add any new column to the target that is independent of the source table, we would otherwise have to disturb the whole mapping. With this practice, it is enough to change only the Expression transformation in front of the target table.
→ The Expression transformation is useful to derive new columns from the existing columns.
Mapping Flow Diagram:
( here the target table has the fields EmpNo, Ename, Job, Sal, Comm, Tax, Total_Sal )
EMP (source) → SQ_EMP → Expression T/F ( Tax = Sal * 0.1 ; Total_Sal = Sal + IIF(IsNull(Comm), 0, Comm) ) → EXP_EMP (target)
→ Steps 1, 2 and 3 are the same here also, so you can do them.
Step 4: Creating the target table and importing its metadata into the Designer tool:
→ Open SQL*Plus and type the below to create the table (the precision and scale 10,2 are illustrative; choose values that fit your data):
CREATE TABLE EXP_EMP
( EmpNo Number,
Ename Varchar2(30),
Job Varchar2(20),
Sal Number(10,2),
Comm Number(10,2),
Tax Number(10,2),
Total_Sal Number(10,2)
) ;
→ Go to the Targets menu → click on the Import from Database option → a new window will open; in it, provide the target ODBC connection as: RRITEC_TARGET
Username as: TARGET_USER
Password as: TARGET_USER
as below:
→ Click on Connect → the same window changes slightly in appearance → in it, expand the Target_User tab and expand the Tables tab, which are marked by the brown color → select our target table, i.e., the EXP_EMP table, and click on Ok,
as below:
Step 5: Creating Mapping:
→ Click on the Mapping Designer icon in the toolbar → go to the Mappings menu → click on the Create option and give the mapping name as m_EXP_EMP, as shown below:
→ Click on Ok → now drag and drop the source and target tables into the mapping workspace as below:
→ Go to the Transformations menu → click on Create → a new window will open; in it, select the type as Expression and give the name as T_Tax_And_TotSal, as below:
→ Click on Create → click on Done → drag and drop the required columns (EmpNo, Ename, Job, Sal, Comm) from SQ_EMP into the T_Tax_And_TotSal transformation table as shown below:
→ Double-click on the header of the Expression transformation → a new window will open; in it, click on the Ports tab → the window will look as:
→ Select the last port, i.e., Comm → click twice on the Add a New Port icon, which is marked by the brown color, to add two new ports as below:
→ Rename the two new ports, one as Tax and the other as TotSal, and set both datatypes as Decimal → disable the Input port for both fields, as:
→ Click on the Expression icon, which is marked by the brown color above → a new window will open, where you need to write the formula for the Tax column as:
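The formula itself appeared only in a screenshot; per the mapping flow diagram above, the expression for the Tax port is:
SAL * 0.1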
→ Click on the Validate button; it gives a validation status message saying whether any syntax errors exist, as:
→ Click on Ok → click on Ok → similarly, click on the expression down arrow mark corresponding to TotSal and type the below expression:
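This expression was also shown only in a screenshot; per the mapping flow diagram, the TotSal port is a null-safe total:
SAL + IIF ( ISNULL ( COMM ) , 0 , COMM )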
→ Click on Validate to check whether any syntax errors exist:
→ Click on Ok → click on Ok → click on Ok → now connect the corresponding ports from the Expression transformation table to the target table as shown below:
→ The remaining process, creating the session, creating the workflow and running the workflow from the task, is the same here also, so you can do that.
Note: In the Session Properties window, make sure to change the load type from the default option, i.e., from Bulk to Normal.
→ After the workflow has run successfully → go to the target table and observe whether the data is properly populated or not.
Performance Tuning:
As we know, there are three types of ports: Input, Output, and Variable.
Variable → this port is used to store any temporary calculation.
1) Use operators instead of functions.
2) Minimize the usage of string functions.
3) If we use a complex expression multiple times in the Expression transformation, make that expression a variable port. Then we need to use only this variable for all computations.
RANK TRANSFORMATION
*******************************
→ It is an Active and Connected type of transformation.
→ The Rank transformation differs from the transformation functions MAX and MIN in that it lets you select a group of top or bottom values, not just one value.
→ You can designate only one rank port in a Rank transformation.
→ The Designer creates a RANKINDEX port for each Rank transformation automatically. The Integration Service uses the rank index port to store the ranking position for each row in a group.
For example, if you create a Rank transformation that ranks the top five salespersons for each quarter, the rank index numbers the salespeople from 1 to 5:
RANKINDEX   SALES_PERSON   SALES
1           Sam            10,000
2           Mary           9,000
3           Alice          8,000
4           Ron            7,000
5           Alex           6,000
→ The RANKINDEX is an output port only. You can pass the rank index to another transformation in the mapping or directly to a target.
→ We cannot edit or delete the RANKINDEX port.
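For readers who think in SQL, the same top-N-per-group idea can be sketched with the RANK analytic function; the table and column names below are hypothetical, matching the example above:
SELECT *
FROM ( SELECT SALES_PERSON, SALES,
              RANK() OVER (PARTITION BY QUARTER ORDER BY SALES DESC) AS RANKINDEX
       FROM   SALES )
WHERE RANKINDEX <= 5;   -- top five salespersons per quarter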
Rank Cache: there are two types of caches here: the rank index cache and the rank data cache.
Rank Index Cache: the index cache holds group information from the group-by ports. If we are using Group By on DEPTNO, then this cache stores the values 10, 20, 30, etc. All group-by columns are in the RANK INDEX CACHE, e.g., DEPTNO.
Rank Data Cache: it holds row data until the PowerCenter Server completes the ranking, and it is generally larger than the index cache. To reduce the data cache size, connect only the necessary input/output ports to subsequent transformations.
Note: all variable ports (if any), the rank port, and all ports going out from the Rank transformation are stored in the RANK DATA CACHE.
Mapping Flow Diagram:
( the target table RANK_EMP has the fields EmpNo, Ename, Sal, DeptNo, Rank )
EMP (source) → SQ_EMP → Rank T/F (top 3 ranks) → RANK_EMP (target)
→ Steps 1, 2 and 3 are the same here also, so you can do and use them.
Step 4: Create and import the target table as per the above mapping flow diagram using SQL*Plus.
Step 5: Creating Mapping:
Go to the Designer tool → click on the Mapping Designer icon → go to the Mappings menu → click on Create → a new window will open; give the mapping name there as:
Click on Ok → drag and drop the source EMP table into the mapping workspace as shown below:
→ Go to the Transformations menu → click on Create → select the type as Rank and name it T_RANK as shown below:
→ Click on Create → click on Done → from the Source Qualifier, drag and drop EmpNo, Sal, DeptNo into the Rank transformation table, i.e., T_RANK, as shown below:
→ Double-click on the header of the Rank transformation table → a new window will open; in it, click on the Ports tab → enable the Rank checkbox, i.e., R[], of the SAL port, because we are finding the rank based on the salary; it is marked by the brown color as shown below:
→ Click on the Properties tab in the same window → set the Top/Bottom attribute to the value Top → provide the Number of Ranks option as 3; the window is as shown below:
→ Click on Ok → drag and drop the target table, i.e., RANK_EMP, into the workspace → connect RANKINDEX of the Rank transformation table to the Rank field in the target table, and connect all the remaining ports from T_RANK to the corresponding ports of the target table as shown below:
→ The remaining process for creating the session, creating the workflow and running the workflow from the task is the same here also.
→ After the workflow has run successfully, go to the target table and observe whether the rank is correctly populated or not.
ROUTER TRANSFORMATION
************************************
→ It is an Active and Connected transformation.
→ The Router transformation is useful to load multiple targets and handle multiple conditions.
→ It is similar to the Filter transformation. The only difference is that the Filter transformation drops the data that does not meet the condition, whereas the Router has an option to capture the data that does not meet the condition. It is useful to test multiple conditions. It has input, output, and default groups.
→ If you need to test the same input data against multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task.
→ Likewise, when you use a Router transformation in a mapping, the Integration Service processes the incoming data only once. When you use multiple Filter transformations in a mapping, the Integration Service processes the incoming data once for each transformation, which affects performance.
Configuring Router Transformation:
The Router transformation has input and output groups. You need to configure these groups.
1) Input group: the Designer copies the input port properties to create a set of output ports for each output group.
2) Output groups: the Router transformation has two kinds of output groups: user-defined groups and the default group.
2-i) User-defined groups: create a user-defined group to test a condition based on the incoming data. Each user-defined group consists of output ports and a group filter condition. You can create or modify the user-defined groups on the Groups tab. Create one user-defined group for each condition you want to specify.
2-ii) Default group: the Designer creates one default group when you create the first user-defined group. You cannot edit or delete the default group. The default group does not have a group filter condition. If all the conditions evaluate to FALSE, the Integration Service passes the row to the default group.
Specifying Group Filter Condition:
Specify the group filter condition on the Groups tab using the Expression Editor. You can enter any expression that returns a single value. The group filter condition returns TRUE or FALSE for each row that passes through the transformation.
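For example, in the mapping built below, the user-defined group carries a condition like the following, with the default group catching every row that fails it:
SAL <= 2500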
Mapping Flow Diagram:
( both target tables have the fields EmpNo, Ename, Job, Sal, DeptNo )
EMP (source) → SQ_EMP (Source Qualifier) → Router T/F → R_SAL_UPTO_2500 (target 1) and R_SAL_AFTER_2500 (target 2)
→ Create the two tables R_SAL_UPTO_2500 and R_SAL_AFTER_2500 from SQL*Plus as:
CREATE TABLE R_SAL_UPTO_2500
( Eno Number(5),
Ename Varchar2(10),
Job Varchar2(10),
Sal Number(6,2),
DeptNo Number(10,2)
);
CREATE TABLE R_SAL_AFTER_2500
( Eno Number(5),
Ename Varchar2(10),
Job Varchar2(10),
Sal Number(6,2),
DeptNo Number(10,2)
);
Creating Mapping:
→ Import the source and target tables into the workspace window.
Go to the Mapping Designer workspace
→ go to the Mappings menu → click on Create; a window will open, and give the mapping name as m_ROUTER as below:
Click on Ok → drag and drop the source and the two target tables into the workspace window; the mapping window will look as below:
Now go to the Transformation menu → click on Create; a new window will open as below:
Click on Create → click on Done → copy and drag all the columns from the Source Qualifier table to the Router transformation table as below:
Observe keenly that in the Router transformation we find an INPUT group, a NEWGROUP1 group, and a DEFAULT group label, marked by the red rectangle as shown above.
→ Double-click on the header of the Router transformation → a new window, the Edit Transformations window, will open → in it, select the Groups tab, which is marked by the red rectangle → click on the Add a New Group to this Router icon, which is also marked by the red rectangle, as shown below:
After clicking on the Add a New Group to this Router icon, the window will look as below:
Rename NEWGROUP1 to SAL_UPTO_2500 → under the Group Filter Condition field, click on the down arrow mark, which is circled in red → a new window, the Expression Editor, will open as below:
Delete the default TRUE from the workspace → go to the Ports tab → double-click on the SAL field → type <= 2500 from the keyboard → click on Ok → click on Ok → click on Ok.
Now connect the fields of the SAL_UPTO_2500 group to one target table and the fields of the DEFAULT1 group to the other target table as shown below:
The remaining process for creating the task and the workflow is the same here also.
After running the workflow, once it has succeeded, go to the database and query the two target tables to test the data: one table should hold all the details of employees whose salaries are up to 2500, and the other table all the details of employees whose salaries are above 2500.
Advantages of Using Router over Filter Transformation :
Use router transformation to test multiple conditions on the same input data.
If you use more than one Filter transformation, the Integration Service needs to process the input once for each Filter transformation. In the case of a Router transformation, the Integration Service processes the input data only once, thereby improving performance.
SORTER TRANSFORMATION
**************************************
→ It is an Active and Connected type of transformation.
→ It is useful to sort data according to the specified sort key, in ascending or descending order as required.
→ You can specify multiple columns as sort keys, and the order (ascending or descending) for each column.
Properties:
Distinct Output Rows: outputs distinct records if you check this option. If we enable this option, it will obviously not return all the rows that arrive as input, so in this case the Sorter transformation acts as an active transformation.
Null Treated Low: you can configure the way the Sorter transformation treats null values. Enable this property if you want the Integration Service to treat null values as lower than any other value when it performs the sort operation. Disable this option if you want the Integration Service to treat null values as higher than any other value.
Case Sensitive: the Case Sensitive property determines whether the Integration Service considers case when sorting data. When you enable the Case Sensitive property, the Integration Service sorts uppercase characters higher than lowercase characters.
Work Directory: you must specify a work directory that the Integration Service uses to create temporary files while it sorts data. After the Integration Service sorts the data, it deletes the temporary files. You can specify any directory on the Integration Service machine to use as a work directory. By default, the Integration Service uses the value specified for the $PMTempDir process variable.
Sorter Cache Size: the maximum amount of memory used for the sort. In version 8 it is set to Auto; in version 7 it is 8 MB. If the memory cannot be allocated, the session fails. If more memory is needed, the Integration Service pages the data to the work directory and writes a warning to the log file.
Better Performance: sort the data using a Sorter transformation before passing it into an Aggregator or Joiner transformation. As the data is sorted, the Integration Service uses memory to do the aggregate and join operations, and does not use cache files to process the data.
Mapping Flow Diagram:
( here, the target table has the same fields as the source table )
EMP (source) → SQ_EMP → Sorter T/F (based on DeptNo) → S_EMP (target)
→ Create the target table, named S_EMP, based on the mapping flow diagram as below:
CREATE TABLE S_EMP AS SELECT * FROM EMP WHERE 1=2 ;
This gives the same structure as the source table, i.e., EMP, with no rows.
Creating mapping: in the Mapping Designer, go to the Mappings menu → click on Create; a small window will open as below:
Click on Ok → drag and drop the source and target tables into the workspace window → go to the Transformation menu → click on Create; a small window will open as:
Click on Create → click on Done; the Mapping Designer window will look as:
Now select and drop all the columns from the Source Qualifier transformation table to the Sorter transformation table; the mapping is like below:
→ Now, double-click on the header of the Sorter transformation → click on the Ports tab → enable the checkbox under the Key field corresponding to DEPTNO, which is marked by the red rectangle as shown below:
Click on Ok → click on Save from the Repository tab.
Next, creating the session and creating the workflow is the same for this also.
After running the workflow, go to the backend and check the data.
Performance Tuning:
1) While using the Sorter transformation, configure the sorter cache size to be larger than the input data size.
2) At the Sorter transformation, use hash auto-keys partitioning or hash user-keys partitioning.
AGGREGATOR TRANSFORMATION
******************************************
→ It is an Active and Connected type of transformation.
→ It is used to perform calculations such as sums, averages, counts, etc., on groups of data.
→ The Integration Service stores the group data and row data in the aggregate cache. The Aggregator transformation provides more advantages than plain SQL: for example, you can use conditional clauses to filter rows.
Components and Options of Aggregator Transformation:
Aggregate Cache: the Integration Service stores data in the aggregate cache until it completes the aggregate calculations. It stores group values in an index cache and row data in the data cache.
Aggregate Expression: enter an expression in an output port or variable port. The expression can include non-aggregate expressions and conditional clauses.
Group By Port: this tells the Integration Service how to create groups. You can configure input, input/output or variable ports for the group. The Integration Service performs the aggregate calculations and produces one row for each group. If you do not specify any group-by ports, the Integration Service returns one row for all input rows. By default, the Integration Service returns the last row received for each group, along with the result of the aggregation. By using the FIRST function, you can instruct the Integration Service to return the first row of the group instead.
Sorted Input: this option can be used to improve session performance. You can use this option only when the input to the Aggregator transformation is sorted on the group-by ports.
Note: by default, the Integration Service treats null values as NULL in aggregate functions. You can change this by configuring the Integration Service.
→ When you run a session that uses an Aggregator transformation, the Integration Service creates an index cache and a data cache in memory to process the transformation. If the Integration Service requires more space, it stores overflow values in cache files.
Aggregator Index Cache: the value of the index cache can be increased up to 24 MB. The index cache holds group information from the group-by ports. If we are using Group By on DEPTNO, then this cache stores the values 10, 20, 30, etc.
Data Cache: the data cache is generally larger than the Aggregator index cache.
Columns in the data cache:
i) Variable ports, if any
ii) Non-group-by input/output ports
iii) Non-group-by input ports used in non-aggregate output expressions
iv) Ports containing aggregate functions
→ An aggregate expression can also include one aggregate function nested within another aggregate function, such as:
MAX ( COUNT ( ITEM ) )
Nested Aggregate Functions:
You can include multiple single-level or multiple nested functions in different output ports in an Aggregator transformation. However, you cannot include both single-level and nested functions in the same Aggregator transformation.
Therefore, if an Aggregator transformation contains a single-level function in any output port, you cannot use a nested function in any other port in that transformation. When you include single-level and nested functions in the same Aggregator transformation, the Designer marks the mapping or mapplet invalid. If you need both single-level and nested functions, create separate Aggregator transformations.
Null Values in Aggregate Functions:
When you configure the Integration Service, you can choose how you want the Integration Service to handle null values in aggregate functions. You can choose to treat null values in aggregate functions as NULL or zero. By default, the Integration Service treats null values as NULL in aggregate functions.
Mapping Flow Diagram:
EMP (source) → SQ_EMP → Sorter T/F → Aggregator T/F → AGG_EMP (target)
Best practice: whatever the order of the sort-key positions in the Sorter transformation, we should maintain the same order for the group-by ports in the Aggregator transformation.
→ The Aggregator transformation supports conditional sums, e.g., SUM( sal, sal < 3000 ), which means: sum only the salaries below the 3000 margin.
→ Only two-level nested aggregate functions are supported,
e.g., MAX ( COUNT ( ITEM ) ) is valid,
whereas e.g., MIN ( MAX ( COUNT ( ITEM ) ) ) is not.
Process :
Create Target Tables as per Mapping Flow Diagram using Sql*Plus as below :
CREATE TABLE AGG_EMP
( DEPTNO NUMBER,
DEPT_WISE_SAL NUMBER) ;
and Import this table into Informatica Designer tool as a Target.
Go to the Mappings menu → click on Create; a small window will open as:
Click on Ok → drag and drop the source and target tables into the workspace as shown below:
Go to the Transformation tab → click on Create; a small window will open as:
Click on Create → click on Done.
Drag and drop DeptNo and Sal from the Source Qualifier transformation table into the Sorter transformation as shown below:
Now, double-click on the header of the Sorter transformation; the Edit Transformations window will open, then click on the Ports tab as shown below:
Enable the checkbox under K (Key) corresponding to the DeptNo port name, which is marked by the red color → click on Ok.
Go to the Transformation menu → click on Create; a small window will open as:
Click on Create → click on Done → copy and drag DeptNo and Sal from the Sorter transformation to the Aggregator transformation as shown below:
Double-click on the Aggregator transformation → click on the Ports tab → the opened window is like below:
Enable the checkbox under the Group By column corresponding to the DeptNo port name as shown above → click on Ok.
Select the Sal column in the above figure → click on the Add New Port icon, which is marked by the green-colored circle → a new column is added under the Port Name field; rename it DEPT_WISE_SAL → disable the Input checkbox for the newly created field → under the Expression field, type SUM(sal) for the same field, like below:
Now, go to the Properties tab in the same window → enable the checkbox corresponding to the Sorted Input field, which is marked by red.
Click on Ok.
Now, connect DeptNo to DeptNo and Dept_Wise_Sal to Dept_Wise_Sal from the Aggregator transformation to the target table as shown below:
Scenario 1: load the target table using the conditional SUM feature for the same mapping as above.
Solution: all the steps remain the same in this scenario; the only change we need to make is: go to the Edit Transformations window of the above Aggregator transformation and, under the Expression field for the DEPT_WISE_SAL port, type SUM( sal, sal < 2500 ), which is marked by the red-colored rectangle as shown below:
The remaining process to create the session and the workflow is the same. Now go to the backend and check the target table data by querying SELECT * FROM AGG_EMP;
We get a result such as:
DeptNo   Dept_Wise_Sal
------   -------------
10       1250
20       3480
That means it sums, department-wise, only those salaries that are less than 2500.
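For comparison, the same conditional-sum logic can be sketched in plain SQL against the EMP source (not from the original document; standard Oracle SQL):
SELECT DEPTNO,
       SUM( CASE WHEN SAL < 2500 THEN SAL ELSE 0 END ) AS DEPT_WISE_SAL
FROM   EMP
GROUP  BY DEPTNO;   -- salaries of 2500 and above contribute nothing to the sum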
Performance Tuning Tips :
1) Use sorted input to decrease the use of aggregate caches:
Sorted input reduces the amount of data cached during the session and improves session performance. Use this option with the Sorter transformation to pass sorted data to the Aggregator transformation.
2) Limit connected input/output or output ports :
Limit the number of connected input/output or output ports to reduce the amount
of data the Aggregator transformation stores in the data cache.
3) Filter the data before aggregating it:
If you use a Filter transformation in the mapping, place the transformation before
the Aggregator transformation to reduce unnecessary aggregation.
LOOK-UP TRANSFORMATION
**************************************
→ It is a Passive and Connected/Unconnected type of transformation.
→ A connected Lookup transformation receives source data, performs a lookup, and returns data to the pipeline.
→ Cache the lookup source to improve performance. If you cache the lookup source, you can use a dynamic or static cache. By default, the lookup cache remains static and does not change during the session. With a dynamic cache, the Integration Service inserts or updates rows in the cache.
→ When you cache the target table as the lookup source, you can look up values in the cache to determine if the values exist in the target. The Lookup transformation marks rows to insert or update the target.
Connected Look-Up Transformation :
The following steps describe how the Integration Service processes a connected Lookup
transformation :
1) A connected Lookup transformation receives input values directly from another
transformation in the pipeline.
2) For each input row, the Integration Service queries the lookup source or cache based
on the lookup ports and the condition in the transformation.
3) If the transformation is uncached or uses a static cache, the Integration Service returns
values from the lookup query.
4) If the transformation uses a dynamic cache, the Integration Service inserts the row
into the cache when it does not find the row in the cache. When the Integration Service
finds the row in the cache, it updates the row in the cache or leaves it unchanged. It flags
the row as insert, update, or no change.
5) The Integration Service passes return values from the query to the next transformation.
If the transformation uses a dynamic cache, you can pass rows to a Filter or Router
transformation to filter new rows to the target.
Mapping Flow Diagram:
( the target table has the fields DeptNo, Dname, EmpNo, Sal )
EMP (source) → SQ_EMP → Look-Up T/F (on DEPT) → LKP_EMP (target)
→ Create the target table as per the mapping flow diagram. Until now we have created target tables using SQL*Plus; now we will see how to create the table from the Designer itself, as below:
Go to the Designer tool → click on the Target Designer icon in the toolbar.
Go to the Targets menu → click on Create; a small window opens as below:
Click on Create → click on Done. An empty table then appears in the workspace as below:
Double-click on the header of the above empty table; another window will open, as below:
In that window, go to the Columns tab → click on the Add New Column icon, which is marked by the red symbol. Here the target table has 4 ports, so click 4 times on that red-marked icon; the window will then look as below:
Rename the column names as per our target table in the mapping flow diagram and select the corresponding datatypes; the final window will look like:
Click on Ok; the target table appears in the workspace as below:
Now the source and target tables are ready and imported into the Designer workspace.
Go to the Mappings menu → click on Create and give the name as in the window below:
Click on Ok → drag and drop the source and target tables into the mapping workspace as below:
Go to the Transformation menu → click on Create; a window will open → in it, select the type as Lookup and the name as T_LKP:
Click on Create → another new window will open as below:
In the above window, click on the Import button; two options pop up as shown below:
Of those two options → click on the From Relational Table option → it opens the Import Tables window as shown below → in it, provide the ODBC connection details for the source as below and click on Connect → under the Select Tables sub-block → select the DEPT table and click on Ok.
Click on Done in the Create Transformation window. Now the mapping window will look like:
Connect DeptNo, EmpNo, Sal from SQ_EMP to the target table, i.e., LKP_EMP → drag and drop DeptNo from SQ_EMP to the T_LKP table, and connect Dname from T_LKP to the LKP_EMP table as shown below:
→ Double-click on the header of the T_LKP table → a new window will open → click on the Condition tab → click on the Add a New Condition icon marked by the red circle → make sure the Lookup Table Column is DEPTNO and the Transformation Port is DEPTNO1, as:
→ Click on Ok.
→ Now connect DNAME from the lookup (T_LKP) to the target (LKP_EMP) as below:
The remaining process for creating the session and the workflow is the same, but in the session properties we need to map the Lookup transformation to the Source ODBC connection, similar to what we did to map the source table to the Source ODBC connection and the target table to the Target ODBC connection.
Note: this is the only transformation for which we need to map the transformation table to an ODBC connection in the session properties window. That means only the Look-Up transformation interacts and talks to the database directly; no other transformation can do this.
UNCONNECTED LOOK-UP TRANSFORMATION
******************************************************
→ An unconnected Lookup transformation is not connected to a source or target. A
transformation in the pipeline calls the Lookup transformation with a :LKP expression.
The unconnected Lookup transformation returns one column to the calling
transformation.
→ The following steps describe the way the Integration Service processes an
unconnected Lookup transformation:
1) An unconnected Lookup transformation receives input values from the result of a
:LKP expression in another transformation, such as an Update Strategy transformation.
2) The Integration Service queries the lookup source or cache based on the lookup ports
and condition in the transformation.
3) The Integration Service returns one value into the return port of the Lookup
transformation.
4) The Lookup transformation passes the return value into the :LKP expression.
Mapping Flow Diagram:
( the target table LKP_EMP has the fields DeptNo, Dname, EmpNo, Sal )
EMP (source) → SQ_EMP → Expression T/F → LKP_EMP (target), with an unconnected Look-Up T/F called from the Expression transformation
The unconnected lookup is not connected in the mapping pipeline. It can be called from any transformation, and it supports expressions.
→ Using an unconnected lookup, we get only one output port.
→ Create and import the target table into the Target Designer window as per the mapping flow diagram (exactly the same procedure as above).
→ Go to the Mappings menu → click on Create → name it m_UnConnected_Lookup → click on Ok.
→ Drag and drop the source EMP table into the mapping workspace.
→ Go to the Transformation menu → click on Create → select Expression as the transformation type and name it T_EXP → click on Create → click on Done as below:
→ From the SQ_EMP table, drag and drop DeptNo, EmpNo, Sal into the Expression transformation (T_EXP) as below:
→ Double-click on the Expression transformation → click on the Ports tab → select the DeptNo column → click on the Add New Port icon → name it DNAME → disable its Input (I) port → click on Ok, as shown below:
→ Connect all the columns from the Expression transformation table to the target table as below:
→ Go to the Transformation menu → click on Create → select Look-Up as the transformation type and name it T_LOOKUP as below:
→ Click on Create → a new window will open as below:
→ Click on the Source button → select the DEPT table from the sub-window → click on Ok. If there is no DEPT table here, we need to import the table first from the database → click on Ok → click on Done.
→ Double-click on the Look-Up transformation → click on the Ports tab → click on Add New Port → name it IN_DEPTNO with datatype Decimal → disable its Output (O) port, as below:
→ In the same window, click on the Condition tab → click on the Add a New Condition icon → make sure the Lookup Table Column is DEPTNO and the Transformation Port is IN_DEPTNO → click on Ok.
→ Double-click on the Expression transformation → click on the Dname expression icon as below and provide the expression as: :LKP.T_LOOKUP( DEPTNO )
Click on Ok → again click on Ok → save it.
Note: if we have more than one mapping flow, it is possible to call the same unconnected lookup again.
Performance Tuning Tips :
1) Add an index to the columns used in a lookup condition:
If you have privileges to modify the database containing a lookup table, you can
improve performance for both cached and uncached lookups. This is important for very
large lookup tables. Since the Integration Service needs to query, sort, and compare
values in these columns, the index needs to include every column used in a lookup
condition.
2) Place conditions with an equality operator (=) first:
If you include more than one lookup condition, place the conditions in the
following order to optimize lookup performance:
Equal to (=)
Less than (<), greater than (>), less than or equal to (<=), greater than or equal to (>=)
Not equal to (!=)
3) Cache small lookup tables:
Improve session performance by caching small lookup tables. The result of the
lookup query and processing is the same, whether or not you cache the lookup table.
4) Join tables in the database:
If the lookup table is on the same database as the source table in the mapping and
caching is not feasible, join the tables in the source database rather than using a Lookup
transformation.
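As a sketch of this tip, when EMP and DEPT live in the same source database, the lookup can be replaced by a join done at read time (standard SQL against the SCOTT-style tables used throughout this document):
SELECT e.EMPNO, e.ENAME, e.SAL, e.DEPTNO, d.DNAME
FROM   EMP  e
JOIN   DEPT d ON d.DEPTNO = e.DEPTNO;   -- the database does the lookup work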
5) Use a persistent lookup cache for static lookups:
If the lookup source does not change between sessions, configure the Lookup transformation to use a persistent lookup cache. The Integration Service then saves and reuses cache files from session to session, eliminating the time required to read the lookup source.
6) Call unconnected Lookup transformations with the :LKP reference qualifier:
When you write an expression using the :LKP reference qualifier, you call
unconnected Lookup transformations only. If you try to call a connected Lookup
transformation, the Designer displays an error and marks the mapping invalid.
7) Configure a pipeline Lookup transformation to improve performance when processing
a relational or flat file lookup source:
You can create partitions to process a relational or flat file lookup source when you
define the lookup source as a source qualifier. Configure a non-reusable pipeline Lookup
transformation and create partitions in the partial pipeline that processes the lookup
source.
SEQUENCE GENERATOR TRANSFORMATION
*******************************************************
OLTP Data:
Enum   Location   Year
101    Hyd        2002
101    Chennai    2006
101    Banglore   2011
OLAP Data:
W_ID   Enum   Location   Year
1      101    Hyd        2002
2      101    Chennai    2006
3      101    Banglore   2011
In the first table Enum acts as the primary key, whereas in the second table the primary key is W_ID, which is nothing but the Warehouse Id. It is also called a Surrogate Key.
Surrogate Key: the key which acts as the primary key in the data warehouse.
→ The Sequence Generator is useful to generate the surrogate key; it generates sequential numbers to use as surrogate key values.
→ It is a Passive and Connected type of transformation. It contains two output ports that you can connect to one or more transformations.
→ The Integration Service generates a block of sequence numbers each time a block of rows enters a connected transformation. If you connect CURRVAL, the Integration Service processes one row in each block.
→ When NEXTVAL is connected to the input port of another transformation, the Integration Service generates a sequence of numbers. When CURRVAL is connected to the input port of another transformation, the Integration Service generates the NEXTVAL value plus the Increment By value.
→ You can make a Sequence Generator reusable, and use it in multiple mappings. You
might reuse a Sequence Generator when you perform multiple loads to a single target.
For example, if you have a large input file that you separate into three sessions
running in parallel, use a Sequence Generator to generate primary key values. If you use
different Sequence Generators, the Integration Service might generate duplicate key
values. Instead, use the reusable Sequence Generator for all three sessions to provide a
unique value for each target row.
→ You can complete the following tasks with a Sequence Generator transformation:
1) Create Keys
2) Replace Missing Values
3) Cycle through a Sequential range of Numbers
Creating keys is the easier task; we will see it in the mapping below.
Replace Missing Values: Use the Sequence Generator transformation to replace
missing keys by using NEXTVAL with the IIF and ISNULL functions.
For example, to replace null values in the ORDER_NO column, you create a
Sequence Generator transformation with the properties and drag the NEXTVAL port to
an Expression transformation. In the Expression transformation, drag the ORDER_NO
port into the transformation along with any other necessary ports. Then create an output
port, ALL_ORDERS. Under ALL_ORDERS, you can then enter the following expression to replace null orders:
IIF ( ISNULL ( ORDER_NO ), NEXTVAL, ORDER_NO)
→ The Sequence Generator transformation has two output ports: NEXTVAL and CURRVAL. You cannot edit or delete these ports. Likewise, you cannot add ports to the transformation.
Mapping Flow Diagram:
( the fields of the target table are: Row_Id, EmpNo, Ename, Sal )
EMP (source) → SQ_EMP → Sequence Generator T/F → SEQ_EMP (target)
→ Create the target table SEQ_EMP as per the known technique. We go to SQL*Plus and create the target table as below:
> CREATE TABLE SEQ_EMP
( Row_Id NUMBER,
EmpNo NUMBER,
Ename VARCHAR2(20),
Sal NUMBER
);
→ Import the target table into the Designer tool → create the mapping → then drag and drop the source and target tables into the mapping workspace as below:
→ Connect EmpNo, Sal from SQ_EMP to the target SEQ_EMP table as below:
→ Click on the Transformation menu → click on Create → select the type as Sequence Generator and name it t_Seq as below:
→ Click on Create → click on Done; the mapping workspace will look as:
→ Connect NEXTVAL of the transformation table to Row_Id of the target table as below:
→ Save the mapping → the remaining process for creating the session and the workflow is the same here also; execute the workflow and observe the target table.
JOINER TRANSFORMATION
*************************************
→ It is an Active and Connected type of transformation.
→ Use the Joiner transformation to join source data from two related heterogeneous sources residing in different locations or file systems. You can also join data from the same source. The Joiner transformation joins sources with at least one matching column.
Note: The Joiner transformation does not match null values. For example, if both
EMP_ID1 and EMP_ID2 contain a row with a null value, the Integration Service does not
consider them a match and does not join the two rows. To join rows with null values,
replace null input with default values, and then join on the default values.
→ The Joiner transformation supports the following types of joins:
1) Normal
2) Master Outer
3) Detail Outer
4) Full Outer
Note: A normal or master outer join performs faster than a full outer or detail outer join.
Normal Join: with a normal join, the Integration Service discards all rows of data from the master and detail sources that do not match, based on the condition.
Master Outer: A master outer join keeps all rows of data from the detail source and the
matching rows from the master source. It discards the unmatched rows from the master
source.
Detail Outer : A detail outer join keeps all rows of data from the master source and the
matching rows from the detail source. It discards the unmatched rows from the detail
source.
Full Outer Join: A full outer join keeps all rows of data from both the master and detail
sources.
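For intuition, the four join types correspond roughly to the following SQL, assuming
DEPT is the master and EMP is the detail (a sketch for comparison only; the Integration
Service performs the join itself, outside the database):
> SELECT e.EMPNO, d.DNAME FROM EMP e INNER JOIN DEPT d ON e.DEPTNO = d.DEPTNO ;       -- Normal
> SELECT e.EMPNO, d.DNAME FROM EMP e LEFT OUTER JOIN DEPT d ON e.DEPTNO = d.DEPTNO ;  -- Master Outer (all detail rows)
> SELECT e.EMPNO, d.DNAME FROM EMP e RIGHT OUTER JOIN DEPT d ON e.DEPTNO = d.DEPTNO ; -- Detail Outer (all master rows)
> SELECT e.EMPNO, d.DNAME FROM EMP e FULL OUTER JOIN DEPT d ON e.DEPTNO = d.DEPTNO ;  -- Full Outer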
à Generally, when we join two source tables, each source gets its own Source
Qualifier, and the two Source Qualifiers are combined and pointed to the
Target table as below :
But in this case both sources come from the same database, i.e., Oracle, so we
don't need a separate Source Qualifier for each source table; instead, we delete
one Source Qualifier, map both sources to the remaining one, and connect it
to the target as in the diagram below.
Joining Two Tables Using single Source Qualifier :
Mapping Flow Diagram :
( Here the Target table has the fields EmpNo, Dname, Job, Sal, DeptNo )
With a single Source Qualifier :
EMP (Oracle Database) and DEPT (Oracle Database) --> SQ_EMPDEPT --> JOINER_EMP (Target Table)
With two Source Qualifiers :
EMP (Oracle Database) --> SQ_EMP and DEPT (Oracle Database) --> SQ_DEPT --> JOINER_EMP (Target Table)
à A Normal Join in Informatica is equivalent to an Equi-Join at the Oracle database level.
à A Master Outer join returns all the records of the Detail table (in this case, EMP), but
not all the records of the Master table (in this case, DEPT).
silly note : e.deptno = d.deptno (+) ----> this keeps all the data of the employee
table (e); in ANSI terms, a left outer join of EMP to DEPT.
e.deptno (+) = d.deptno ----> this keeps all the data of the department
table (d); in ANSI terms, a right outer join of EMP to DEPT.
à create the Target table as per mapping flow diagram. Go to Sql Plus and create the
table as below :
> CREATE TABLE JOINER_EMP
( EmpNo Number,
Dname Varchar2(20),
Job Varchar2(20),
Sal Number,
DeptNo Number
);
à go to Mappings menuà click on Createà name it as m_JOINER as below :
à Drag and drop the Source tables EMP and DEPT and the target table JOINER_EMP
into the mapping window workspace; the window will look as below:
à In our case both source tables come from the same database, so we don't need two
separate Source Qualifiers as discussed previously; just delete either Source
Qualifier. Here I am deleting SQ_DEPT, and the window will then look as below:
à Double click on the header of the SQ_EMP table; it will open one window à in that click
on the Rename button à again a small window will open à in that type the name as
SQ_EMPDEPT for better clarity, as shown below:
à click on Ok.
à Drag and drop DeptNo, Dname, Loc from the DEPT table to the SQ_EMPDEPT table; then
the window will look as:
à Map the corresponding ports from SQ_EMPDEPT table to JOINER_EMP table as
shown below:
à now, double click on the SQ_EMPDEPT transformation à a new window will open à in
that click on the Properties tab à in the Sql Query option click on the corresponding down
arrow mark, which is marked by the brown rectangle as shown below:
à After click on that down arrow markà another window will open as below:
In that provide ODBC datasource as : RRITEC_SOURCE
Username as : SOURCE_USER
Password as : SOURCE_USER
à then click on Generate SQL buttonà then automatically one query will popup into
the workspace of SQL sub-window , which is marked by Green Color in below window.
Then window will look like as below:
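The generated query will resemble the following (a sketch; the exact column list depends
on the ports in SQ_EMPDEPT, and the join condition assumes the EMP-DEPT key
relationship was imported):
> SELECT EMP.EMPNO, EMP.ENAME, EMP.JOB, EMP.SAL, DEPT.DEPTNO, DEPT.DNAME, DEPT.LOC
FROM EMP, DEPT
WHERE EMP.DEPTNO = DEPT.DEPTNO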
Click on Okà again click on Ok.
The remaining process to create the Session, create the Workflow and run the Workflow
is the same here also. After running the workflow, goto the target table and preview the data
to check whether it is populated correctly.
Joining one Database table and one Flat File :
Silly Note : whichever table we drag and drop into the workspace second, the
Informatica Designer automatically treats that table as : MASTER
à Open Notepad and type as:
Deptno,Dname,Loc
10,Hr,Hyderabad
20,Sales,Chennai
30,Stock,Pune
Click on Save with name as DEPT.txt and place this notepad file at below location:
F:\Informatica\Powercenter 8.6.0 Server\Infa_Shared\SrcFiles.
à We had already created the Target table JOINER_EMP, so you can use the same
table in this scenario also:
à Import the Source table and the source Notepad file into the Informatica Designer tool.
à Goto the Mapping Designer window à click on Mappings menu à click on Create and
name it as m_JOINER_EMP and drag and drop the source EMP, the source Flat-File and the
target JOINER_EMP into the workspace as shown below:
à goto Transformations menuà click on Createà select transformation type as
Joiner and give name as T_JOIN as below:
à click on Createà click on Done, then mapping window looks as below:
à Drag and drop Ename, Sal, DeptNo from the SQ_EMP table and drag and drop Deptno,
Dname from SQ_DEPT to the T_JOIN transformation table as below:
à Double click on Joiner transformation, then Edit Transformations window will open
à in that click on the Ports tab, and make sure that the source table having
fewer records is treated as Master, i.e., that its Master checkbox is enabled,
as below:
à click on Condition tab on the same above window as:
à click on the Add a new condition icon, marked in red in the above
window, and make sure the condition is as marked in red below
à click on Ok à connect all the required ports from Joiner Transformation ports to
Target table ports as below:
à click on Save.
à goto the PowerCenter Workflow Manager window à click on Workflows menu à click on
Create à name it as : w_Hetro_Join , then the window will look as:
Click on Ok à goto Tasks menu à click on Create and give the name as s_Hetro_Join
as below:
à Click on Create; then a window will open, in which we need to select our mapping as below
:
à click on Okà click on Doneà then connect the Session and Workflow using Link
Task and the workspace will look as:
à now double click on Sessionà Edit Tasks window will openà in that click on
Mapping window as shown below:
à from the Navigator window at left side of above diagramà click on SQ_EMP and the
corresponding Relational DB-connection is mapped to SOURCE as shown below:
à similarly select SQ_DEPT from the leftside Navigator windowà then the
corresponding properties will appear at right side
à in that select Import Type as File
Source File type as: Direct
Source File Directory as : $PMSourceFileDir
Source File Name as : DEPT.txt
à similarly select the target table from the leftside navigator and map the corresponding
Relational DB-connection to TARGET
à click on Okà and right click on Workflow and click on Start Workflow from Task
option and goto Target table and observe the target table data.
Joining Two Flat Files using Joiner Transformation :
à Open notepad and type the below data as :
Deptno,Ename,Sal
10,Kishor,1000
20,Divya,2000
30,Venkat,3000
Save the above notepad file with the name of EMP.txt
à similarly open another notepad and type the below data as :
Deptno,Dname,Loc
10,Hr,Hyderabad
20,Sales,Chennai
30,Stock,Pune
Save the above notepad file with the name of DEPT.txt
à Now place the above two notepad files in the below location as:
F:\Informatica\Powercenter 8.6.0 Server\Infa_Shared\SrcFiles
à Goto the Mapping Designer window à goto Mappings menu à click on Create à give
the mapping name as : m_Join_FlatFiles as below :
Click on Okà now drag and drop the above two Source FlatFiles and the Target Table,
then the mapping window looks as:
à goto Transformations menuà click on Createà select the Transformation type as
Joiner and give the name as : T_Join_FlatFiles as below :
à click on Createà Click on Doneà then drag drop Deptno, Ename and Sal from
SQ_EMP and Deptno, Dname from SQ_DEPT to the T_Join_FlatFiles transformation
table as shown below:
à Double click on the header of the Joiner transformation; the Edit Transformations window
will open. In that, click on the Condition tab à make sure the condition is as in the
red-marked rectangle in the window below:
à click on Ok à then connect the corresponding ports from the Joiner Transformation table
to the target table as below:
à the remaining process to create the Workflow, create the Task and set the
properties of the Task is the same as above
à after running the workflow, goto the target table and view the data to check whether it is
properly populated.
Loading one Source Flat File data into another Target Flat File :
à in the Designer tool, click on the Target Designer icon à goto Targets menu à click on
Create à name it as FlatFiles_Target and select the Database type as : Flat File as shown
below :
à click on Createà Click on Done.
à Double click on the header of the above target flat-file table à then the Edit Tables
window will open à click on the Columns tab
à click on Add a new column to this table three times (because we need three fields in the
target table); then the above window will look as:
à Name those fields as Deptno, Ename, Sal and set the appropriate datatypes as
shown in below:
à click on Okà click on Mapping Designer iconà goto Mappings menuà click on
Create à give the mapping name as m_FlatFiles_Target as shown below :
à Click on Okà now drag and drop source EMP.txt and target FlatFiles_Target.txt
file into the mapping window workspace as shown below :
à Goto the Workflow Manager tool à goto Workflows menu à click on Create à give the
name as : w_FlatFiles_Target and click on Ok.
à Create the session with the name of s_FlatFiles_Target and click on okà now
connect the Workflow and Session using Link Taskà now run the Workflow.
à After running the Workflow, goto the backend location of
F:\Informatica\Powercenter 8.6.0 Server\Infa_Shared\SrcFiles
and observe whether the Target Flat file has been created.
Loading a List of Flat Files into a Table Using Indirect Method :
à Open a Notepad and type the below data as :
Deptno,Ename,Sal
10,Kishor,1000
20,Divya,2000
30,Shreya,3000
Save this file with the name of EMP.txt
à similarly, open one more Notepad and type as below :
Deptno,Ename,Sal
40,Swetha,4000
50,Sweety,5000
60,Alekhya,6000
Save this file with the name of EMP1.txt
Now place the above two Notepads in the below location of
C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles
à similarly, open one more notepad and type the above two notepads' path locations as
below :
C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles\EMP.txt
C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles\EMP1.txt
Save this notepad with the name of EMPLIST.txt and place this notepad also in the
same location of SrcFiles.
à Open the Designer tool à Import EMP as a Source à create and import a target table with
the name of T_ListofFiles having fields Deptno, Ename, Sal using SqlPlus.
à Create a Mapping with the name of m_Load_ListofFiles. Drag and drop the source (EMP
is enough) and target into the workspace and connect the corresponding Ports as shown
below :
à goto Workflow managerà create a workflow with the name of w_T_ListofFiles à
create a Task with the name of s_T_ListofFiles à connect the Workflow and session
à Double click on Sessionà click on mappings tabà select the SQ_EMP from the
leftside navigator window and the corresponding properties will appear at rightside.
In that select Source File Type as: Indirect
Source File Directory as: $PMSourceFileDir
Source File Name as: EMPLIST.txt
Click on Okà save it and run the workflow
à goto the table T_ListofFiles and observe the data.
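Note : with the Indirect source file type, the Integration Service opens EMPLIST.txt,
reads each line in it as the path of a source file, and loads those files one after another
as if they were a single source; this is why the rows of both EMP.txt and EMP1.txt land
in T_ListofFiles.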
XML SOURCE QUALIFIER TRANSFORMATION
*****************************************************
à It is an Active and Connected type of transformation .
à Open notepad and type the below data as:
<EMP>
<ROWS>
<DEPTNO>10</DEPTNO>
<ENAME>RamReddy</ENAME>
<SAL>1000</SAL>
</ROWS>
<ROWS>
<DEPTNO>20</DEPTNO>
<ENAME>Venkat</ENAME>
<SAL>2000</SAL>
</ROWS>
<ROWS>
<DEPTNO>30</DEPTNO>
<ENAME>Divya</ENAME>
<SAL>3000</SAL>
</ROWS>
</EMP>
à Save the above file with the name EMP.xml and place that file in the below location
as :
C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles
à Open the Informatica Designer toolà click on Source Analyzer icon from the tool
bar à click on Sources menuà click on Import From XML definitionà new window
will openà from that navigate to our Src Folder and select the newly created XML file
as shown below :
à click on Open à then one warning window will come up as below:
à click on Yes à again another window will come up as below:
à if you have time, read all those options; otherwise simply click on Ok à a
new window will open as :
à click on Nextà new window will open , in that select Hierarchy Relationship radio
button and select De-normalized XML views options as below:
à click on Finish à if we go and observe in the Source Analyzer window, we will see a
table structure somewhat different from a normal database table structure, as :
à create a Target table with name as XML_EMP with the columns of DEPTNO,
ENAME, SAL and import into Informatica Designer tool as:
> CREATE TABLE XML_EMP
( Deptno Number(5),
Ename Varchar2(30),
Sal Number(10,2)
) ;
à click on Mapping designer iconà goto Mappings menuà click on Createà name it
as m_XML as:
à click on Ok à from the leftside Navigator window, expand Sourcesà expand
XML_File tabà drag and drop EMP into workarea , also drag and drop Target into
workarea as:
à connect corresponding ports from XML source qualifier to Target table as:
à goto Workflow Manager toolà create a workflow with the name w_XML and
create a Session with the name of s_XML and connect the both using Link Task.
à double click on Sessionà click on Mappings tabà select XMLDSQ_EMP from
leftside navigator window and provide the Source File Directory as :
$PMSourceFileDir
Source File Name as : EMP.xml
Source File Type as : Direct, as shown below:
à select target XML_EMP table and map the proper Target connectionà click on Ok
and run the Workflow.
NORMALIZER TRANSFORMATION
********************************************
à It is Active and Connected Transformation
à It is Useful to read Data from COBOL files.
à It is associated with COBOL file.
à It is useful to convert Single I/P record into Multiple O/P records.
The below table is of De-Normalized Form :
Year     Account_Type     Q1       Q2       Q3       Q4
-------  ---------------  -------  -------  -------  -------
2012     Savings          1000     2000     3000     4000
The below table is of Normalized Form :
Year     Account_Type     Quarter  Amount
-------  ---------------  -------  -------
2012     Savings          1        1000
2012     Savings          2        2000
2012     Savings          3        3000
2012     Savings          4        4000
In the above table, the field Quarter is nothing but the GCID (Generated Column Id).
à Open one notepad and type as below :
Year,Account_Type,Q1,Q2,Q3,Q4
2012,Savings,1000,2000,3000,4000
Save the notepad and place this file in the below location :
C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles
à Create a target table with the columns Year, Account_Type, Quarter, Amount and the
table name T_EMP_NORMALIZER using SqlPlus as :
> CREATE TABLE T_EMP_NORMALIZER
( Year Number(5),
Account_Type Varchar2(30),
Quarter Number(3),
Amount Number(10,2)
) ;
and import that table into Designer tool.
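Conceptually, the Normalizer performs the same unpivot as the following SQL sketch
(shown only for intuition; SRC stands for the flat-file data above, and the transformation
itself is configured through the Occurs setting rather than through SQL):
> SELECT Year, Account_Type, 1 AS Quarter, Q1 AS Amount FROM SRC
UNION ALL
SELECT Year, Account_Type, 2, Q2 FROM SRC
UNION ALL
SELECT Year, Account_Type, 3, Q3 FROM SRC
UNION ALL
SELECT Year, Account_Type, 4, Q4 FROM SRC ;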
à Goto the Designer tool à click on the Mapping Designer window icon à goto Mappings
menu à Create a mapping with the name of m_Normalizer as shown below:
à click on Okà Drag and Drop Source flat file and target table into the mapping
window workspace.
à goto Transformations menu à click on Create à select the transformation type as
Normalizer and give the name as T_Normalizer as below:
à click on Createà click on Doneà now the mapping window will look as:
à Double click on Normalizer transformationà Edit transformations window will open
as :
In the above window, click on the Normalizer tab and click on the Add a new column to this
table icon three times to create three ports (marked in red in the
above window); the resultant window will look as:
à change the Column Names and Datatypes and Occurs field as per below:
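( The screenshot is not reproduced here; by analogy with the de-normalized layout above,
an expected setup is Year and Account_Type with Occurs = 0 and Amount with Occurs = 4,
so that the four quarter columns collapse into a single Amount port with a generated
column id. )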
à click on Ok à the Normalizer Transformation table now shows its fields in a
different manner from what we regularly see in the mapping window workspace, as:
à connect the ports from SQ_EMP to T_NORMALIZER and connect the ports from
T_NORMALIZER table to target T_EMP_NORMALIZER as shown below:
UPDATE STRATEGY TRANSFORMATION
************************************************
à It is Active and Connected Transformation
à Target table should contain a Primary Key
Implementing SCD1 :
( SCD1 is useful to maintain only Current Data )
à It keeps the Source Data and the Target Data the same.
Mapping Flow Diagram :
EMP --> SQ_EMP --> LookUp T/F --> Expression T/F --> Router T/F, which splits into two flows :
Insert flow : Expression T/F --> Update Strategy Transformation (for Insert flow) --> SCD1 (target),
with a Sequence Generator Transformation feeding the key
Update flow : Expression T/F --> Update Strategy Transformation (for Update flow) --> SCD1 (target)
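Conceptually, the whole SCD1 mapping behaves like the following Oracle MERGE (a sketch
for intuition only; seq_scd1 is a hypothetical database sequence standing in for the
Sequence Generator):
> MERGE INTO SCD1 t
USING ( SELECT Empno, Ename, Job, Sal FROM EMP ) s
ON ( t.Empno = s.Empno )
WHEN MATCHED THEN UPDATE SET t.Ename = s.Ename, t.Job = s.Job, t.Sal = s.Sal
WHEN NOT MATCHED THEN INSERT ( EmpKey, Empno, Ename, Job, Sal )
VALUES ( seq_scd1.NEXTVAL, s.Empno, s.Ename, s.Job, s.Sal ) ;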
à The Target table SCD1 has the fields EmpKey, EmpNo, Ename, Job, Sal.
Goto SqlPlus and create the target table as below :
> CREATE TABLE SCD1
( EmpKey Number(5),
EmpNo Number(5),
Ename Varchar2(30),
Job Varchar2(30),
Sal Number(5)
) ;
Import this table into Informatica Designer tool.
à Goto Mappings menuà click on createà give the name as m_SCD1_EMP as:
à click on Ok--> drag and drop Source EMP table and Target table into the mapping
window workspace as :
à Goto Transformations menuà click on Createà select the transformation type as
LookUp and name it as T_LKP_SCD_UPDATE as shown below:
à click on Create à a window will open; in that click on the Target button à select our
target table, i.e., SCD1, from the available list of tables as shown below:
à click on Ok à click on Doneà the mapping window will looks as:
à Drag and drop the EmpNo port from SQ_EMP to the LookUp transformation table as :
à Double click on LookUp transformation tableà click on Conditions tabà click on
Add a New Condition iconà make sure the Condition as below :
à Click on Ok .
Note : if we get any errors after clicking the Ok button in the above window, just change
the datatype of EMPNO to Decimal in the LookUp transformation table.
à Goto Transformations menuà click on Createà select type as Expression and give
name as EXP_B4_ROUTER_UPDATE and drag and drop EmpNo, Ename, Job, Sal
from Lookup Transformation to Expression Transformation table as below:
à from the Look-Up transformation , drag and drop EmpKey to Expression
transformation as:
à double click on Expression Transformationà click on Ports tabà click on Add a
new Port icon twice to create two ports as:
NEW_RECORD      (Output)   expression : IIF( ISNULL( EMPKEY ), TRUE, FALSE )
UPDATE_RECORD   (Output)   expression : IIF( NOT ISNULL( EMPKEY ), TRUE, FALSE )
as shown below:
à click on Ok à drop a Router Transformation à drag and drop all the ports from the
Expression Transformation to the Router Transformation as:
à double click on Router Transformationà click on Groups tabà click on Add a
New Group icon twice to create two groups and give conditions as:
NEW_RECORD      : NEW_RECORD = TRUE
UPDATE_RECORD   : UPDATE_RECORD = TRUE
as shown below:
à click on Okà now the mapping will look like as:
New Record Flow :
à drop three Transformations of type Expression, Update Strategy, Sequence Generator
into the mapping window workspace à drag and drop the ports EMPNO1, ENAME1,
JOB1, SAL1 from the NEW_RECORD group into the Expression Transformation as:
à drag and drop all the above four ports from Expression Transformation to Update
Strategy transformation as:
à double click on the Update Strategy Transformation à click on the Properties tab à provide
the Update Strategy Expression as 0 ( i.e., DD_INSERT ) as:
à click on Okà connect all the ports from Update Strategy to Target table and connect
NEXTVAL port from Sequence Generator Transformation to EMPKEY of target table as
Update Record Flow :
à drop Update Strategy, Expression transformations into the above Mapping Workspace
à in the above Mapping, from the Router Transformation, drag and drop the ports
EMPKEY3, EMPNO3, ENAME3, JOB3, SAL3 of the UPDATE_RECORD group into the
Expression Transformation as:
à drag and drop all the ports from Expression Transformation to Update Strategy
Transformation as:
à double click on the Update Strategy Transformation à click on the Properties tab à set the
Update Strategy Expression value as 1 ( i.e., DD_UPDATE ) as shown below:
à click on Okà copy and paste the target table into the mapping workspace, for the
purpose of Updated Record flowà and connect all the Ports from Update Strategy
Transformation to another Target as :
à goto Workflow Manager toolà create a Workflow with the name of w_SCD1 à
create a Session with the name of s_SCD1 and map this task to our above Mapping
m_SCD1_EMP à click on Okà connect Task and Workflow using Link Task.
à double click on Sessionà click on Mapping tabà select SQ_EMP and set that to
SOURCE connectionà select Target and set that to TARGET connectionà select
Look-Up transformation and set that to TARGET connection à click on Properties tab
à set Treat Source Rows As to Data Driven à click on Ok à save the Task,
run the Workflow and observe the Target table data.
Testing :
à goto SqlPlus à login as TARGET_USER;
> INSERT INTO SOURCE_USER.EMP ( Empno, Ename, Job, Sal )
VALUES ( 102, 'Kishor', 'Manager', 90000 );
> COMMIT ;
> UPDATE SOURCE_USER.EMP
SET Sal = 5001 WHERE Empno = 7839;
> COMMIT ;
à go and run the Workflow now and observe the Result.
Exercise 1 : Try to implement the above scenario using the below Mapping Flow Diagram :
Condition to be used in the Update Strategy is : IIF( ISNULL( EMPKEY ), 0, 1 )
Exercise 2 : Implement the above SCD1 using the inbuilt Slowly Changing Dimension
Wizard.
SLOWLY CHANGING DIMENSION 2
************************************************
à Create a Target table with the name of EMP_SCD2 with columns EMPKEY (pk),
EMPNO, ENAME, JOB, DEPTNO, SAL, VERSION and import it into the Informatica
Designer tool as:
> CREATE TABLE EMP_SCD2
( Empkey number(5) Primary Key,
Empno number(5),
Ename varchar2(20),
Job varchar2(20),
Deptno number(5),
Sal number(10,2),
Version number(5)
) ;
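For intuition, after one salary change for employee 7839 the target would hold two rows
like these (values illustrative only):
EMPKEY  EMPNO  ENAME  JOB        DEPTNO  SAL    VERSION
------  -----  -----  ---------  ------  -----  -------
1       7839   KING   PRESIDENT  10      5000   0
15      7839   KING   PRESIDENT  10      10000  1
The old row is never touched; every change inserts a new row with a higher VERSION.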
à create a Mapping with the name of m_SCD2 as :
à click on Ok à drag and drop the Source and Target tables into the Mapping workspace.
Mapping Flow Diagram :
EMP --> SQ_EMP --> ... --> Update Strategy T/F --> Target, with a Look-Up T/F on the target table
à drop a Look-Up transformation and use the Look-Up table as the Target table as :
à click on Create à a new window will open; in that click on the Target button à select
EMP_SCD2 as below :
à click on Okà from Source Qualifier transformation , drag and drop EMPNO into
Look-Up transformation as:
à double click on the Look-Up transformation à click on the Ports tab à change the
datatypes from Double to Decimal (if they are in the Double datatype initially) à click on
the Condition tab à click on Add a new Condition à make sure that the condition is
EmpNo = EmpNo1 as :
à click on Ok à drop an Expression Transformation into the mapping window à drag and
drop EMPNO, ENAME, JOB, SAL, DEPTNO from the Source Qualifier Transformation
into the Expression Transformation as :
à drag and drop EMPKEY, SAL, VERSION from Look-Up transformation into
Expression transformation as:
à double click on Expression Transformationà click on Ports tabà select last portà
click on Add a new port to this transformation icon twice to create two portsà name
those two ports as:
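( The screenshot is not reproduced here; by analogy with the SCD1 mapping, a plausible
pair of flag ports is, as an assumption:
NEW_RECORD      (Output)   expression : IIF( ISNULL( EMPKEY ), TRUE, FALSE )
UPDATE_RECORD   (Output)   expression : IIF( NOT ISNULL( EMPKEY ) AND SAL != SAL1, TRUE, FALSE )
where SAL1 stands for the salary value returned by the Look-Up. )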
à click on Okà now the mapping window will look as:
à drop a Router Transformationà drag and drop all the ports from Expression
Transformation to Router Transformation as:
à double click on Router Transformationà click on Groups tabà click on Add a
new group to this transformation icon twice to create two groups and name those two
groups as NEWRECORD and UPDATEGROUP as below:
à click on Ok
New Record Flow :
à drop Expression, Sequence Generator, Update Strategy transformations into mapping
workspace as:
à drag and drop EMPNO, ENAME, JOB, SAL, DEPTNO of New Record section
from Router Transformation to Expression Transformation as:
à double click on the Expression Transformation à click on the Ports tab à select the last
port à click on the Add a new port icon à rename the new port as VERSION , set the data
type as decimal à give the value 0 in the expression as:
à click on Okà from Expression Transformation, drag and drop all the ports to Update
Strategy Transformation à connect corresponding ports from Update Strategy to Target
table à connect NEXTVAL port from Sequence Generator transformation to
EMPKEY of target table as :
Update Record Flow :
à Drop an Expression transformation and an Update Strategy transformation into the
workspace à from the Update Record Group of the Router transformation, drag and drop
EMPNO3, ENAME3, JOB3, SAL3, DEPTNO3, VERSION3 into the Expression Transformation as:
à double click on the Expression transformation à click on the Ports tab à select the last
port à click on the Add a new port icon à name that port as VERSION , datatype as Decimal,
and under expression type VERSION3 + 1 à click on Ok
à click on Ok à from the Expression Transformation, drag and drop EMPNO, ENAME,
JOB, SAL, DEPTNO, VERSION to the Update Strategy Transformation à copy and paste
the Target table in the mapping workspace to store Update Record related dataà connect
corresponding ports from Update Strategy transformation to Target table à connect
NEXTVAL port from Sequence Generator transformation to EMPKEY of Target table as
:
à create a Workflow with the name of w_SCD2 and create a task with the name of
s_SCD2 à double click on the Session à click on the Mappings tab à make sure the Source
and Target connections are appropriate à make sure the Look-Up transformation
connection points to the Target, and run the Workflow.
Testing :
à in the Source EMP table, add one more record using Insert statementà update any
Existing record in the same source tableà now run the above workflow and observe the
output.
Exercise : Use the In-Built SCD2 wizard and built the Mapping.
SLOWLY CHANGING DIMENSION - 3
************************************************
à It is useful to maintain the current data plus one level of previous data.
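For intuition, after updating employee 7839's salary from 5000 to 10000, an SCD3 target
row would look like this (columns simplified and values illustrative; the wizard generates
its own key and previous-value columns):
EMPNO  ENAME  SAL     PREV_SAL
-----  -----  ------  --------
7839   KING   10000   5000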
Process Using Wizard only :
à go to Mappings menuà select Wizards optionà click on Slowly Changing
Dimension option then one window will open in that provide name as SCD3 à select
last radio button (Type-3 dimension) as:
à click on Next à select the source table as : SOURCE_USER.EMP and give the new target
table name as EMP_SCD3 as:
à click on Next à select the EMPNO column from the Target Table Fields section and click
on the Add>> button; that column will then be placed in the Logical Key Fields section
as :
à select the SAL column from the Target Table Fields section and click on the Add>> button;
that column will then be placed in the Fields to compare for changes section as :
à click on Next à click on Finish à then automatically one mapping will be formed and
placed in the Mapping window workspace as:
à save the mappingà click on Target Designer workspace iconà from the leftside
Navigator window, expand targets tab à drag and drop EMP_SCD3 table into target
designer workspaceà goto Targets menuà click on Generate and Execute SQL
option , new window will open as:
à in the above window, click on the Generate and Execute option; a new small window will
open, in which we need to provide the ODBC connection, username and password for the
target database as:
à click on Connectà click on Closeà save it
à create a workflow with the name of w_SCD3 and create a task with the name of
s_SCD3 and map the Session and Workflow using Link Task.
à double click on Taskà provide database connections appropriately to both source
and targetà click on Okà save the workflow and run the Workflow.
Testing :
à login to SOURCE_USER schema using Sql Plus, give statements as:
> UPDATE EMP SET SAL=10000 WHERE EMPNO = 7839 ;
> COMMIT ;
Exercise : Use a Router Transformation in place of the Filter Transformation and develop
SCD3.
SOURCE QUALIFIER TRANSFORMATION
***************************************************
à Create a target table with name as EMP_DEPT with columns DNAME,
DEPT_WISE_SALARY using SqlPlus and import into Informatica as :
> CREATE TABLE EMP_DEPT
(
Dname Varchar2(30),
Dept_Wise_Salary Number(10,2)
);
à Create a mapping with the name of M_SQ as:
click on Ok à drag and drop the sources EMP and DEPT and the target EMP_DEPT tables
into the mapping window workspace as:
à delete the two default Source Qualifier tables; the mapping window then looks as:
à go to Transformations menuà click on Createà select transformation type as
Source Qualifier and give name as Emp_Dept as below:
à click on Create; then one small window will open asking us to select the tables for
which we are going to create the Source Qualifier transformation. Here we are creating it
for both tables, so I am selecting EMP and DEPT as below:
à click on Ok à click on Done à automatically the newly created Source Qualifier
Transformation's ports are mapped with the corresponding ports from both the EMP and
DEPT tables as:
à now connect DNAME and SAL from source qualifier transformation to target table
as :
à double click on Source Qualifierà click on Properties tab as:
à click on the SQL Query attribute down arrow mark, which is marked in red
above; a new SQL editor window will open à in that select
ODBC data source as RRITEC_SOURCE
Username as: SOURCE_USER
Password as: SOURCE_USER
As below :
Click on Generate SQL à then the query will be generated automatically in the workspace
of the SQL window, as shown below:
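The generated query will resemble the plain join below; and since the target column is
DEPT_WISE_SALARY, one could further edit it into an aggregate (the second statement is an
assumption about the intent; the Generate SQL button itself produces only the join):
> SELECT DEPT.DNAME, EMP.SAL
FROM EMP, DEPT
WHERE EMP.DEPTNO = DEPT.DEPTNO
> SELECT DEPT.DNAME, SUM( EMP.SAL )
FROM EMP, DEPT
WHERE EMP.DEPTNO = DEPT.DEPTNO
GROUP BY DEPT.DNAME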
à click on Okà click on Okà the remaining procedure to create Session and
Workflow is same here also.
UNION TRANSFORMATION
*********************************
à Connect to source database( SOURCE_USER ) using Sql Plus and create two tables
as below:
> CREATE TABLE emp1020 AS SELECT deptno, sal FROM scott.EMP
WHERE deptno IN( 10,20 );
> CREATE TABLE emp2030 AS SELECT deptno, sal FROM scott.EMP
WHERE deptno IN ( 20,30 );
à connect to the target database ( TARGET_USER ) using Sql Plus and create a table as:
> CREATE TABLE emp_Union102030 AS SELECT deptno, sal FROM scott.EMP
WHERE 1 = 2;
à now, import the above two source tables and the one target table into the Informatica
Designer.
à create a mapping with the name of m_UNION as:
Click on Ok à drag and drop the two sources and the target table into the mapping window as:
à goto transformations menuà click on createà select type as UNION and name it as
t_UNION as:
à click on Create à click on Done à note that the Union transformation's edit dialog has
two extra tabs, Groups and Group Ports, as shown below:
à double click on Union Transformationà click on Groups tab as below:
à click on Add a new group icon twice which is marked by Red colorà give names
as below:
à click on Group Ports tab
à click on Add a new port icon twice and name those two ports as below:
à click on Okà now the mapping window will look as:
à connect all ports from SQ_EMP1020 with the emp1020 group of Union
Transformation as:
à similarly, connect all the ports from SQ_EMP2030 with the emp2030 group of Union
Transformation as below:
à now, connect all the output ports from Union Transformation to target table as below:
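Note : the Union transformation behaves like SQL UNION ALL, so it does not remove
duplicates; rows for deptno 20 will arrive from both emp1020 and emp2030. The equivalent
SQL, for comparison:
> SELECT deptno, sal FROM emp1020
UNION ALL
SELECT deptno, sal FROM emp2030 ;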
STORED PROCEDURE TRANSFORMATION
*************************************************
à Using Sql Plus, create a Target table with columns EMPNO, SAL, TAX and import
that table into Informatica Designer tool as :
> CREATE TABLE emp_sproc
( EMPNO number(5),
SAL number(10,2),
TAX number(10,2)
);
à using Sql Plus, connect to source database (SOURCE_USER ) , create a Stored
Procedure as:
> CREATE OR REPLACE PROCEDURE emp_tax ( Sal IN number,
Tax OUT number )
IS
BEGIN
tax := sal * 0.1;
END;
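The procedure can be smoke-tested in SQL*Plus before it is imported (a quick check; the
expected output for an input of 1000 is 100):
> VARIABLE t NUMBER
> EXECUTE emp_tax( 1000, :t )
> PRINT t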
à create a mapping with the name of m_SPROC as:
à click on Ok à drag and drop the source EMP table and the target emp_sproc into the
mapping window as:
à goto Transformations menuà click on Import Stored Procedure optionà new
window will open , in that provide required details to import the stored procedure as:
à click on Okà from the Source Qualifier transformation, connect SAL to stored
procedure SAL port and connect TAX port of Stored Procedure to TAX port of target
table as below:
à connect EMPNO and SAL from source qualifier transformation to corresponding
ports in Target table as below:
à creating the Task, creating the Workflow and providing proper connections to Source and
Target at the session is the same here also.
Create Mapping using Un-Connected Stored Procedure Transformation
***********************************************************************
à create a Mapping with the name of m_SP_UnConnected as:
à click on Ok à drag and drop the source and target tables into the mapping window
workspace as below :
à goto Transformations menuà click on createà select type as Expression and
name it as t_SP_UnConnected as:
à click on Create à click on Done à drag and drop EMPNO and SAL from the Source
Qualifier transformation to the Expression Transformation as :
à goto Transformationsà click on Import Stored Procedure optionà provide
required detailsà select our Stored Procedureà click on Ok.
à double click on Expression Transformationà click on Ports tab as:
à click on the Add a new port to this transformation icon, which is marked in red in the
above window, name the port TAX, and disable its Input checkbox so it is output-only, as:
à click on the TAX expression down arrow mark, which is circled in red; the Expression
Editor window will open; type the formula as below:
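( The screenshot is not reproduced here; based on the standard syntax for calling an
unconnected Stored Procedure, :SP.procedure_name( arguments ), the formula for this
procedure would be:
:SP.EMP_TAX( SAL, PROC_RESULT ) )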
Note : in the above, PROC_RESULT is the keyword.
à click on Validateà click on Okà click on Okà connect all the ports from
Expression transformation to Target table as:
à creating the Session and creating the Workflow is the same here also.
TRANSACTION CONTROL TRANSFORMATION
*****************************************************
à It is Active and Connected type of transformation
à A transaction is the set of rows bound by commit or roll back rows. You can define a
transaction based on a varying number of input rows.
à In PowerCenter, you define transaction control at the following levels:
Within a Mapping : Within a mapping, you use the Transaction Control transformation
to define a transaction. You define transactions using an expression in a Transaction
Control transformation. Based on the return value of the expression, you can choose to
commit, roll back, or continue without any transaction changes.
Within a Session : When you configure a session, you configure it for user-defined
commit. You can choose to commit or roll back a transaction if the Integration Service
fails to transform or write any row to the target.
à When you run the session, the Integration Service evaluates the expression for each
row that enters the transformation. When it evaluates a commit row, it commits all rows
in the transaction to the target or targets. When the Integration Service evaluates a roll
back row, it rolls back all rows in the transaction from the target or targets.
à in the Transaction Control Transformation, we have 5 built-in variables.
1) TC_CONTINUE_TRANSACTION : The Integration Service does not perform any
transaction change for this row. This is the default value of the expression.
2) TC_COMMIT_BEFORE : The Integration Service commits the transaction, begins a
new transaction, and writes the current row to the target. The current row is in the new
transaction.
3) TC_COMMIT_AFTER : The Integration Service writes the current row to the target,
commits the transaction, and begins a new transaction. The current row is in the
committed transaction.
4) TC_ROLLBACK_BEFORE : The Integration Service rolls back the current
transaction, begins a new transaction, and writes the current row to the target. The current
row is in the new transaction.
5) TC_ROLLBACK_AFTER : The Integration Service writes the current row to the
target, rolls back the transaction, and begins a new transaction. The current row is in the
rolled back transaction.
Note : If the transaction control expression evaluates to a value other than commit, roll
back, or continue, the Integration Service fails the session.
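For example, the following expression commits whenever the department number changes
(a sketch; it assumes rows arrive sorted by DEPTNO and that CHANGE_FLAG is a
hypothetical port, computed in an upstream Expression transformation, that is 1 on the
first row of each new DEPTNO):
IIF( CHANGE_FLAG = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION )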
à Create a Target table of name EMP_TCL with the columns EMPNO, ENAME, JOB,
SAL and import into Informatica tool.
> CREATE TABLE EMP_TCL
( EMPNO number(5),
ENAME varchar2(20),
JOB varchar2(20),
SAL number(10,2)
) ;
à goto mappings menuà click on Createà give name as m_TCL as :
à click on Okà drag and drop source EMP table and target EMP_TCL table into
mapping window workspace as:
à goto transformations menuà click on Createà select type as Transaction Control
and give name as t_TC as:
à click on Create à click on Done à drag and drop EMPNO, ENAME, JOB and SAL
from SQ_EMP to the t_TC transformation table as :
à double click on Transaction Control Transformationà click on Properties tab as:
à click on the Transaction Control Condition value dropdown button, which is circled in
red; the Expression Editor will open à in that give the condition as below:
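( The screenshot is not reproduced here; any expression returning the built-in variables
above will work, for example, as an illustration only:
IIF( SAL > 2000, TC_COMMIT_AFTER, TC_CONTINUE_TRANSACTION ) )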
à click on Validate à click on Ok à click on Ok à connect all the Ports from the
Transaction Control Transformation to the Target table as:
à Save the mapping à creating the Session and creating the Workflow is the same here also.
Advanced Training of Informatica
Mapping Parameters and Variables
*************************************
PARAMETERS :
à A mapping Parameter represents a constant value that we can define before running a
session.
à A mapping parameter retains the same value throughout the entire session.
Initial and Default value :
When we declare a mapping parameter or variable in a mapping or a mapplet, we can
enter an initial value. When the Integration Service needs an initial value, and we did not
declare an initial value for the parameter or variable, the Integration Service uses a
default value based on the data type of the parameter or variable.
Data Type     Default Value
----------    --------------
Numeric       0
String        Empty string
Date/Time     1/1/1
à create a target table with the columns EMPNO, ENAME, JOB, SAL, COMM
and import it into Informatica as :
> CREATE TABLE emp_par_var
( EMPNO number(5),
ENAME varchar2(20),
JOB varchar2(20),
SAL number(10,2),
COMM number(10,2)
) ;
à create a mapping with the name of m_MP as:
Click on Okà drag and drop Source EMP table and target EMP_PAR_VAR table into
mapping window workspace as:
à goto Mappings menu à click on the Parameters and Variables option à a new window
will open as:
à click on Add a new variable to this table à name the new field as $$DEPTNO à
set the Type as Parameter from the dropdown à set IsExprVar to FALSE as below:
à click on Okà drop Filter Transformation into mapping workspaceà drag and drop
EMPNO, ENAME, JOB, SAL, COMM from source qualifier transformation to Filter
transformation as:
à double click on the Filter transformation à click on the Properties tab
à click on the Filter Condition value drop-down arrow mark, which is marked in red;
the Expression Editor window will open as:
à type DEPTNO = in the formula space, click on the Variables tab à expand the Mapping
Parameters tab à under that we will see our created parameter, i.e., $$DEPTNO.
à double click on $$DEPTNO in the leftside navigator window; the formula will now
read DEPTNO = $$DEPTNO as:
à click on Validateà click on Okà click on Okà click on Okà drag and drop
corresponding columns from Filter transformation to Target table as:
à save the Mapping à creating the Session and creating the Workflow is the same here also.
Creating Parameter File : navigate to the below path :
F:\Informatica\PowerCenter 8.6.0 Server\Infa_Shared\BWParam
à right click on free space à select New à click on Text Document à name it as
Deptno.prm à double click on that file and type as:
[RRITEC.S_MP]
$$DEPTNO = 20
Save the file and close it ( RRITEC --> folder name ; S_MP --> session name )
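For the file to take effect, the session must point to it: in the Workflow Manager, edit
the session, and on the Properties tab set the Parameter Filename attribute to the full
path of Deptno.prm; on the next run the filter receives DEPTNO = 20.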
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations
130297267 transformations

Contenu connexe

Tendances

Tutorial on how to load images in crystal reports dynamically using visual ba...
Tutorial on how to load images in crystal reports dynamically using visual ba...Tutorial on how to load images in crystal reports dynamically using visual ba...
Tutorial on how to load images in crystal reports dynamically using visual ba...Aeric Poon
 
Murach: How to use Entity Framework EF Core
Murach: How to use Entity Framework EF  CoreMurach: How to use Entity Framework EF  Core
Murach: How to use Entity Framework EF CoreMahmoudOHassouna
 
Visual basics Express Project
Visual basics Express ProjectVisual basics Express Project
Visual basics Express ProjectIftikhar Ahmed
 
Visual basic 6 black book
Visual basic 6 black bookVisual basic 6 black book
Visual basic 6 black bookAjay Goyal
 
Responsive Design and Bootstrap
Responsive Design  and BootstrapResponsive Design  and Bootstrap
Responsive Design and BootstrapMahmoudOHassouna
 
Php and web forms
Php and web formsPhp and web forms
Php and web formssana mateen
 
Visual basic ppt for tutorials computer
Visual basic ppt for tutorials computerVisual basic ppt for tutorials computer
Visual basic ppt for tutorials computersimran153
 
Introduction to programming using Visual Basic 6
Introduction to programming using Visual Basic 6Introduction to programming using Visual Basic 6
Introduction to programming using Visual Basic 6Jeanie Arnoco
 
( 13 ) Office 2007 Coding With Excel And Excel Services
( 13 ) Office 2007   Coding With Excel And Excel Services( 13 ) Office 2007   Coding With Excel And Excel Services
( 13 ) Office 2007 Coding With Excel And Excel ServicesLiquidHub
 
Visual Basic Programming
Visual Basic ProgrammingVisual Basic Programming
Visual Basic ProgrammingOsama Yaseen
 
Membuat aplikasi penjualan buku sederhana
Membuat aplikasi penjualan buku sederhanaMembuat aplikasi penjualan buku sederhana
Membuat aplikasi penjualan buku sederhanaYusman Kurniadi
 

Tendances (19)

Tutorial on how to load images in crystal reports dynamically using visual ba...
Tutorial on how to load images in crystal reports dynamically using visual ba...Tutorial on how to load images in crystal reports dynamically using visual ba...
Tutorial on how to load images in crystal reports dynamically using visual ba...
 
Vb tutorial
Vb tutorialVb tutorial
Vb tutorial
 
Murach: How to use Entity Framework EF Core
Murach: How to use Entity Framework EF  CoreMurach: How to use Entity Framework EF  Core
Murach: How to use Entity Framework EF Core
 
As pnet
As pnetAs pnet
As pnet
 
Visual basics Express Project
Visual basics Express ProjectVisual basics Express Project
Visual basics Express Project
 
Create our own keylogger
Create our own keyloggerCreate our own keylogger
Create our own keylogger
 
Visual basic 6 black book
Visual basic 6 black bookVisual basic 6 black book
Visual basic 6 black book
 
Vb introduction.
Vb introduction.Vb introduction.
Vb introduction.
 
Visual programming
Visual programmingVisual programming
Visual programming
 
Responsive Design and Bootstrap
Responsive Design  and BootstrapResponsive Design  and Bootstrap
Responsive Design and Bootstrap
 
Php and web forms
Php and web formsPhp and web forms
Php and web forms
 
Visual basic ppt for tutorials computer
Visual basic ppt for tutorials computerVisual basic ppt for tutorials computer
Visual basic ppt for tutorials computer
 
Introduction to programming using Visual Basic 6
Introduction to programming using Visual Basic 6Introduction to programming using Visual Basic 6
Introduction to programming using Visual Basic 6
 
Meaning Of VB
Meaning Of VBMeaning Of VB
Meaning Of VB
 
( 13 ) Office 2007 Coding With Excel And Excel Services
( 13 ) Office 2007   Coding With Excel And Excel Services( 13 ) Office 2007   Coding With Excel And Excel Services
( 13 ) Office 2007 Coding With Excel And Excel Services
 
Controls events
Controls eventsControls events
Controls events
 
Visual Basic Programming
Visual Basic ProgrammingVisual Basic Programming
Visual Basic Programming
 
Membuat aplikasi penjualan buku sederhana
Membuat aplikasi penjualan buku sederhanaMembuat aplikasi penjualan buku sederhana
Membuat aplikasi penjualan buku sederhana
 
Project1 VB
Project1 VBProject1 VB
Project1 VB
 

Similaire à 130297267 transformations

Generic steps in informatica
Generic steps in informaticaGeneric steps in informatica
Generic steps in informaticaBhuvana Priya
 
Informatica expression transformation
Informatica expression transformationInformatica expression transformation
Informatica expression transformationsirmanohar
 
Informatica power center
Informatica power centerInformatica power center
Informatica power centerSouravSRT
 
Change transport system in SAP
Change transport system in SAP Change transport system in SAP
Change transport system in SAP chinu141
 
Lc solutions sop manual2
Lc solutions sop manual2Lc solutions sop manual2
Lc solutions sop manual2RAVI KANT
 
Lsmw for master data upload simple explanation
Lsmw for master data upload simple explanationLsmw for master data upload simple explanation
Lsmw for master data upload simple explanationManoj Kumar
 
Informatica complex transformation ii
Informatica complex transformation iiInformatica complex transformation ii
Informatica complex transformation iiAmit Sharma
 
Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...
Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...
Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...Mumbai B.Sc.IT Study
 
Informatica complex transformation i
Informatica complex transformation iInformatica complex transformation i
Informatica complex transformation iAmit Sharma
 
Lsmw (Legacy System Migration Workbench)
Lsmw (Legacy System Migration Workbench)Lsmw (Legacy System Migration Workbench)
Lsmw (Legacy System Migration Workbench)Leila Morteza
 
Ad basic tech_workshop
Ad basic tech_workshopAd basic tech_workshop
Ad basic tech_workshopmanisherp084
 
Access tips access and sql part 6 dynamic reports
Access tips  access and sql part 6  dynamic reportsAccess tips  access and sql part 6  dynamic reports
Access tips access and sql part 6 dynamic reportsquest2900
 
SAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdfSAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdfPhani Pavan
 
SAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdfSAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdfPhani Pavan
 
Uploading customer master extended address using bapi method
Uploading customer master extended address using bapi methodUploading customer master extended address using bapi method
Uploading customer master extended address using bapi methodlondonchris1970
 
Cis 407 i lab 1 of 7
Cis 407 i lab 1 of 7Cis 407 i lab 1 of 7
Cis 407 i lab 1 of 7helpido9
 
Installing Process Oracle 10g Database Software on Windows 10
Installing Process Oracle 10g Database Software on Windows 10Installing Process Oracle 10g Database Software on Windows 10
Installing Process Oracle 10g Database Software on Windows 10Azharul Islam Shopon
 

Similaire à 130297267 transformations (20)

Generic steps in informatica
Generic steps in informaticaGeneric steps in informatica
Generic steps in informatica
 
Informatica expression transformation
Informatica expression transformationInformatica expression transformation
Informatica expression transformation
 
Informatica power center
Informatica power centerInformatica power center
Informatica power center
 
Change transport system in SAP
Change transport system in SAP Change transport system in SAP
Change transport system in SAP
 
Lc solutions sop manual2
Lc solutions sop manual2Lc solutions sop manual2
Lc solutions sop manual2
 
Lsmw for master data upload simple explanation
Lsmw for master data upload simple explanationLsmw for master data upload simple explanation
Lsmw for master data upload simple explanation
 
Informatica complex transformation ii
Informatica complex transformation iiInformatica complex transformation ii
Informatica complex transformation ii
 
Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...
Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...
Data Warehousing (Practical Questions Paper) [CBSGS - 75:25 Pattern] {2015 Ma...
 
Informatica complex transformation i
Informatica complex transformation iInformatica complex transformation i
Informatica complex transformation i
 
User Guide
User GuideUser Guide
User Guide
 
Lsmw (Legacy System Migration Workbench)
Lsmw (Legacy System Migration Workbench)Lsmw (Legacy System Migration Workbench)
Lsmw (Legacy System Migration Workbench)
 
Ad basic tech_workshop
Ad basic tech_workshopAd basic tech_workshop
Ad basic tech_workshop
 
Access tips access and sql part 6 dynamic reports
Access tips  access and sql part 6  dynamic reportsAccess tips  access and sql part 6  dynamic reports
Access tips access and sql part 6 dynamic reports
 
Soa8
Soa8Soa8
Soa8
 
SAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdfSAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdf
 
SAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdfSAP ABAP lsmw beginner lerning tutorial.pdf
SAP ABAP lsmw beginner lerning tutorial.pdf
 
Uploading customer master extended address using bapi method
Uploading customer master extended address using bapi methodUploading customer master extended address using bapi method
Uploading customer master extended address using bapi method
 
Mca 504 dotnet_unit5
Mca 504 dotnet_unit5Mca 504 dotnet_unit5
Mca 504 dotnet_unit5
 
Cis 407 i lab 1 of 7
Cis 407 i lab 1 of 7Cis 407 i lab 1 of 7
Cis 407 i lab 1 of 7
 
Installing Process Oracle 10g Database Software on Windows 10
Installing Process Oracle 10g Database Software on Windows 10Installing Process Oracle 10g Database Software on Windows 10
Installing Process Oracle 10g Database Software on Windows 10
 

Dernier

What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersRaghuram Pandurangan
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
colored window, select the EMP and DEPT tables → click on Ok → go to the Repository menu → click on Save.

Step 3: Creating ODBC connection to the Target Database:
→ Go to the SQL*Plus tool → type / as sysdba → click on Login.
→ After the SQL*Plus window opens, create the target user:

CREATE USER TARGET_USER IDENTIFIED BY TARGET_USER;
GRANT DBA TO TARGET_USER;

→ Connect (log in) to SQL*Plus using the above username and password:

CONN TARGET_USER@ORCL     -- press Enter
Password: TARGET_USER

SELECT * FROM TAB;
CREATE TABLE T_EMP AS SELECT * FROM SCOTT.EMP WHERE 1=2;

→ Go to Start → click on Control Panel → click on Administrative Tools → click on Data Sources (ODBC) → then one window will open as below:
Click on the Add button → another window will open; select the Oracle in OraDb10g_home2 option as shown below:
→ Click on Finish → a new window will open → in that, provide
Data Source Name: RRITEC_TARGET
TNS Service Name: ORCL
User ID: TARGET_USER
→ Click on Test Connection → a small window will open; provide
Service Name: ORCL
User Name: TARGET_USER
Password: TARGET_USER
→ Click on Ok.

Step 4: Importing Target Metadata:
→ Go to the Informatica Designer tool → click on the Target Designer icon.
→ Go to the Targets menu → click on Import From Database → a new window will open → in that, provide
ODBC data source: RRITEC_TARGET
Username: TARGET_USER
Password: TARGET_USER
→ Click on Connect → a new window will open → in that, expand the TARGET_USER schema, which is marked by the brown color, as below:
→ Expand TABLES → select the target table and click on Ok → go to the Repository menu and click on Save.
Step 5: Creating Mapping:
→ In the Designer tool → click on the Mapping Designer icon → go to the Mappings menu → click on Create → give the name m_First_Mapping as shown below:
→ Click on Ok.
→ From the Repository navigator, expand the RRITEC folder → expand the Sources tab → drag and drop the EMP table into the mapping workspace → expand the Targets tab → drag and drop the T_EMP table beside the Source Qualifier table (SQ_EMP) in the workspace → map the corresponding ports from SQ_EMP to the target table T_EMP → go to the Repository menu → click on Save.

* POWERCENTER WORKFLOW MANAGER *
→ It is useful to create Tasks and Workflows.
→ Creating a workflow involves the following steps:
1) Creating Workflow
2) Creating Connection
3) Creating Task
4) Linking Task and Workflow
5) Running Workflow

1) Creating Workflow:
Start → All Programs → Informatica PowerCenter 8.6.0 → Client → click on PowerCenter Workflow Manager → the window will look as below.
→ Right-click on our repository, i.e., rep_05 → click on Connect → a new window will open; provide the Username and Password as Administrator.
→ Click on Connect → right-click on the RRITEC folder → click on Open → click on the Workflow Designer icon as shown below:
→ Go to the Workflows menu → click on Create → name it First_Workflow:
→ Click on Ok → the workflow workspace will look as below:
→ Go to the Repository menu → click on Save.

2) Creating Connection:
→ For SOURCE: in the Connections menu → click on the Relational option → a new window will open:
→ Select Type as ALL → click on the New button → a new window will open; select Oracle from the list of available options as shown below:
→ Click on Ok → a new window will open → in that, provide
Name: SOURCE
Username: SOURCE_USER
Password: SOURCE_USER
Connection String: ORCL
→ Click on the Ok button.
→ Similarly, create one more connection for the target using the same procedure, but provide the details as
Name: TARGET
Username: TARGET_USER
Password: TARGET_USER
Connection String: ORCL
→ Click on Ok.

3) Creating Task:
→ In the Workflow Manager window → go to the Tasks menu → click on Create → a new window will open; select the type as Session and the name as First_Session as shown below:
→ Click on Create → click on Done → a new window will open:
From that window, select our mapping, i.e., m_First_Mapping, from the list of all mappings available in our repository (if the mapping is not listed here, it was not saved in the Designer; save it first) → click on Ok → the workflow window will look as below:
→ Double-click on First_Session → a new window will open; click on the Mapping tab:
→ From the navigator of that window → select SQ_EMP under the Sources tab; the window will change as below:
→ Click on the down-arrow mark, which is marked by the brown color → another window will open:
In that, we need to select our newly created SOURCE connection, because what we selected from the navigator is a source → click on Ok.
→ Similarly, select our target table, i.e., T_EMP → click on the down-arrow mark → select the TARGET connection and click on Ok.

4) Linking Task and Workflow:
→ Go to the Tasks menu → click on the Link Task option → click on the workflow Start task (represented by a green arrow) and drag the cursor to First_Session → the flow will look as below:
→ Go to the Repository menu → click on Save.

5) Running the Workflow:
→ Right-click on the workflow, i.e., the green-colored arrow → click on Start Workflow From Task.

TRANSFORMATIONS
********************************
→ In Informatica, transformations help to transform the source data according to the requirements of the target system, and they ensure the quality of the data being loaded into the target.

Transformations are of two types: Active and Passive.
Active Transformation: An active transformation can change the number of rows that pass through it from source to target, i.e., it eliminates rows that do not meet the condition in the transformation.
Passive Transformation: A passive transformation does not change the number of rows that pass through it, i.e., it passes all rows through the transformation.

→ Transformations can also be Connected or Unconnected.
Connected Transformation: A connected transformation is connected to other transformations or directly to the target table in the mapping.
Unconnected Transformation: An unconnected transformation is not connected to other transformations in the mapping. It is called within another transformation, and returns a value to that transformation.
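To make the active/passive distinction concrete, here is a rough SQL analogy (a sketch only; inside Informatica these are transformation objects, not queries, and the table is the EMP source used throughout this document):

-- Active behaviour: like a WHERE clause, a Filter can change the row count
SELECT * FROM emp WHERE sal >= 3000;

-- Passive behaviour: an Expression derives new columns but returns
-- exactly one output row for every input row
SELECT empno, ename, sal, sal * 0.1 AS tax FROM emp;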
FILTER TRANSFORMATION
*************************************
→ It is an Active and Connected type of transformation.
→ A filter condition returns TRUE or FALSE for each row that the Integration Service evaluates, depending on whether the row meets the specified condition. Each row that returns TRUE is passed through the transformation; each row that returns FALSE is dropped, and the Integration Service writes a message to the session log.
→ For any task, it is good practice to first draw a diagram to get a clear idea of the task: a DFD (Data Flow Diagram) or MFD (Mapping Flow Diagram). So here also we draw a sample data flow diagram.
Steps 1, 2 and 3 are the same here as well, so they are not repeated.

Step 4: Creating and Importing Target Table Metadata:
Keep in mind that if we want to import any table structure into the Designer window to hold the target data, that table must first exist in the database. If it is not there, create it using the procedure below.
Go to SQL*Plus, give the username as TARGET_USER, the password as TARGET_USER, and the host string as ORCL, then click on Ok. After the SQL*Plus window opens, type the following syntax to create the table definition:
CREATE TABLE F_EMP
( Empno  NUMBER(10),
  Ename  VARCHAR2(20),
  Job    VARCHAR2(50),
  Sal    NUMBER(10,2),
  Comm   NUMBER(10,2),
  Deptno NUMBER(10) );

Now open the Informatica Designer window, right-click on the repository (here rep_05) in the navigator window → click on Connect → a small window will open asking for the username and password:
→ Provide the Username and Password as Administrator → click on Connect. Our created folder, where we need to store all our work, is now available; expand the folder to see the list of options available in it as below:
Now we need to import the target table into the Designer, as per the steps below:
→ Click on the Warehouse Designer (Target Designer) icon at the top of the workspace window, which is circled in red in the diagram below:
Now go to the Targets menu → click on the Import From Database option → a new window will open as below:
From that window, select
ODBC data source: RRITEC_TARGET
Username: TARGET_USER
Password: TARGET_USER
→ Click on Connect and look under the Select Tables sub-window, which is circled in red; expand the user and you will find the list of tables available under that user. Select the required table, here F_EMP, and click on Ok; the table metadata then comes into the Designer window under the Target Designer workspace as shown below:
Step 5: Creating Mapping:
Go to the Designer tool and click on the Mapping Designer icon, which is circled in red as shown below:
Go to the Mappings menu, click on the Create option; a small window will open. Give the mapping any name; here it is m_FILTER. Click on Ok.
In the Repository navigator window, expand the RRITEC folder → expand the Sources folder → expand the SOURCE_USER tab; there we find our source table. Drag and drop the source table EMP into the workspace window; once the drag is done, the Designer window looks as below:
Whenever we drag a source table into the workspace, another table also comes in by default, named SQ_<source_table_name>, here SQ_EMP, where SQ stands for Source Qualifier.
Now, to create the transformation, go to the Transformations menu → click on Create; a window will open asking for two values. Select the transformation type as Filter and the name of the transformation as t_SAL_GREATERTHAN_3000, as shown below:
Click on Create → click on Done. Now the Designer workspace will look as follows:
Next we need to give the filter condition on the newly created Filter transformation. Select the required fields in the Source Qualifier and drag and drop them from the Source Qualifier table into the Filter transformation table as below:
→ To give the condition for the Filter transformation, click on the header of the Filter transformation table; the Edit Transformations window will open. Select the Properties tab; the window will look as below:
→ Click on the down-arrow mark of the Filter Condition attribute, which is circled in red in the diagram above; the Expression Editor window will open:
→ In the Expression Editor window, select the Ports tab → double-click on SAL → type >= 3000 from the keyboard, then click on the Validate button; it shows a validation success message if the syntax is proper:
Click on Ok → click on Ok → again Ok.
→ Now expand the TARGET_USER tab; there we find our target table. Drag and drop the target table F_EMP into the workspace window; once done, the Designer window looks as below:
→ Now map the required fields from the Filter transformation table into the target table as below:
→ Go to the Repository menu → click on Save, and observe in the Output window of the Designer whether the saved mapping is marked Valid or Invalid.
→ To check whether the expected records were loaded into the target table after the workflow executes, go to SQL*Plus and query the table:
SELECT * FROM F_EMP;
We get only those records that meet the condition we specified in the Filter transformation.

Filtering Rows with NULL Values:
To filter rows containing null values or spaces, use the ISNULL and IS_SPACES functions to test the value of the port. For example, to filter out rows that contain a NULL value in the FIRST_NAME port, use the following condition:
IIF ( ISNULL ( FIRST_NAME ), FALSE, TRUE )
This condition states that if the FIRST_NAME port is NULL, the return value is FALSE and the row should be discarded. Otherwise, the row passes through to the next transformation.

Limitations of the Filter Transformation:
i) Using a Filter transformation, we can load only one target table at a time.
ii) If we have five different conditions, we need to create five different Filter transformations.
iii) We cannot capture the false (unsatisfied) records of the condition in a table.
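To verify the load, the rows that t_SAL_GREATERTHAN_3000 passes into F_EMP should match a direct query against the source; a hedged sketch, assuming the EMP source table used above:

-- Rows that satisfy the filter condition SAL >= 3000
SELECT empno, ename, job, sal, comm, deptno
FROM   emp
WHERE  sal >= 3000;

-- SQL counterpart of IIF(ISNULL(FIRST_NAME), FALSE, TRUE)
-- (CUSTOMERS and FIRST_NAME are hypothetical names for illustration)
-- SELECT * FROM customers WHERE first_name IS NOT NULL;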
NOTE:
1) Use the Filter transformation early in the mapping: To maximize session performance, keep the Filter transformation as close as possible to the sources in the mapping. Rather than passing rows that you plan to discard through the mapping, you can filter out unwanted data early in the flow of data from sources to targets.
2) Use the Source Qualifier transformation to filter: The Source Qualifier transformation provides an alternate way to filter rows. Rather than filtering rows from within a mapping, the Source Qualifier transformation filters rows when read from a source. The main difference is that the Source Qualifier limits the row set extracted from a source, while the Filter transformation limits the row set sent to a target. Since a Source Qualifier reduces the number of rows used throughout the mapping, it provides better performance. However, the Source Qualifier transformation only lets you filter rows from relational sources, while the Filter transformation filters rows from any type of source. Also note that since it runs in the database, you must make sure that the filter condition in the Source Qualifier transformation uses only standard SQL. The Filter transformation can define a condition using any statement or transformation function that returns either a TRUE or FALSE value.

EXPRESSION TRANSFORMATION
*************************************
→ It is a Passive and Connected type of transformation.
→ Use the Expression transformation to perform non-aggregate, row-level calculations.
→ It is best practice to place one Expression transformation after the Source Qualifier and one Expression transformation before the target table: if in the future we add a new column to the target that is independent of the source table, we do not need to disturb the whole mapping; it is enough to change the Expression transformation just before the target.
→ The Expression transformation is useful to derive new columns from the existing columns.
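As a sketch of the kind of derivation the next mapping performs (the logic lives in the transformation's output ports, not in SQL; NVL plays the role of IIF(ISNULL(...), 0, ...)):

SELECT empno, ename, job, sal, comm,
       sal * 0.1          AS tax,       -- derived column: 10% tax
       sal + NVL(comm, 0) AS total_sal  -- derived column: salary plus commission
FROM   emp;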
Mapping Flow Diagram: (here the target table has the fields EmpNo, Ename, Job, Sal, Comm, Tax, Total_Sal)

EMP (source) → SQ_EMP → EXPRESSION T/F [Tax = Sal * 0.1; Total = Sal + IIF(ISNULL(Comm), 0, Comm)] → EXP_EMP (target)

→ Steps 1, 2 and 3 are the same here as well.

Step 4: Creating the target table and importing its metadata into the Designer tool:
→ Open SQL*Plus and create the table (the precision and scale, e.g. (10,2), are chosen here for illustration):

CREATE TABLE EXP_EMP
( EmpNo     NUMBER,
  Ename     VARCHAR2(30),
  Job       VARCHAR2(20),
  Sal       NUMBER(10,2),
  Comm      NUMBER(10,2),
  Tax       NUMBER(10,2),
  Total_Sal NUMBER(10,2) );

→ Go to the Targets menu → click on the Import From Database option → a new window will open; provide
ODBC connection: RRITEC_TARGET
Username: TARGET_USER
Password: TARGET_USER
as below:
→ Click on Connect → the same window changes slightly → expand the TARGET_USER tab and then the TABLES tab, which are marked by the brown color → select our target table, i.e., the EXP_EMP table, and click on Ok, as below:

Step 5: Creating Mapping:
→ Click on the Mapping Designer icon in the toolbar → go to the Mappings menu → click on the Create option and give the mapping name m_EXP_EMP as shown below:
→ Click on Ok → now drag and drop the source and target tables into the mapping workspace as below:
→ Go to the Transformations menu → click on Create → a new window appears; select the type as Expression and give the name T_Tax_And_TotSal as below:
→ Click on Create → click on Done → drag and drop the required columns (EmpNo, Ename, Job, Sal, Comm) from SQ_EMP to the T_Tax_And_TotSal transformation table as shown below:
→ Double-click on the header of the Expression transformation → a new window appears; click on the Ports tab:
→ Select the last port, i.e., Comm → click twice on the Add a New Port icon, which is marked by the brown color, to add two new ports as below:
→ Rename the two new ports as Tax and TotSal and set both datatypes to Decimal → disable the Input checkbox for both fields:
→ Click on the Expression icon, which is marked by the brown color above → a new window appears, where you write the formula for the Tax column:
→ Click on the Validate button; it reports the validation status and any syntax errors:
→ Click on Ok → click on Ok → similarly, click on the expression down-arrow mark corresponding to TotSal and type its expression:
→ Click on Validate to check for syntax errors:
→ Click on Ok → click on Ok → click on Ok → now connect the corresponding ports from the Expression transformation table to the target table as shown below:
→ The remaining process, creating the session, creating the workflow and running the workflow from the task, is the same here as well.
Note: In the Session Properties window, make sure to change the Target Load Type from the default option, Bulk, to Normal.
→ After the workflow has run successfully → go to the target table and observe whether the data was populated properly.

Performance Tuning:
As we know, there are three types of ports: Input, Output and Variable.
Variable → this port is used to store any temporary calculation.
1) Use operators instead of functions.
2) Minimize the usage of string functions.
3) If we use a complex expression multiple times in the Expression transformation, make that expression a variable port and reuse the variable in all computations.

RANK TRANSFORMATION
*******************************
→ It is an Active and Connected type of transformation.
→ The Rank transformation differs from the MAX and MIN transformation functions in that it lets you select a group of top or bottom values, not just one value.
→ You can designate only one Rank port in a Rank transformation.
→ The Designer creates a RANKINDEX port for each Rank transformation automatically. The Integration Service uses the rank index port to store the ranking position for each row in a group. For example, if you create a Rank transformation that ranks the top five salespersons for each quarter, the rank index numbers the salespeople from 1 to 5:

RANKINDEX  SALES_PERSON  SALES
1          Sam           10,000
2          Mary           9,000
3          Alice          8,000
4          Ron            7,000
5          Alex           6,000
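In database terms, the top-3 mapping built below behaves roughly like an analytic RANK() query (a sketch; the Rank transformation computes this inside the session, and a group-by port would correspond to a PARTITION BY clause):

SELECT *
FROM  ( SELECT empno, sal, deptno,
               RANK() OVER (ORDER BY sal DESC) AS rankindex
        FROM   emp )
WHERE rankindex <= 3;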
→ The RANKINDEX is an output port only. You can pass the rank index to another transformation in the mapping or directly to a target.
→ We cannot edit or delete the RANKINDEX port.

Rank Cache: there are two types of caches here: the rank index cache and the rank data cache.
Rank Index Cache: The index cache holds group information from the group-by ports. If we are using Group By on DEPTNO, this cache stores values such as 10, 20, 30. All group-by columns are in the rank index cache.
Rank Data Cache: It holds row data until the PowerCenter Server completes the ranking, and it is generally larger than the index cache. To reduce the data cache size, connect only the necessary input/output ports to subsequent transformations.
Note: Any variable ports, the rank port and all ports going out of the Rank transformation are stored in the rank data cache.

Mapping Flow Diagram: (the target table RANK_EMP has the fields EmpNo, Ename, Sal, DeptNo, Rank)

EMP (source) → SQ_EMP → RANK T/F (top 3 ranks) → RANK_EMP (target)

→ Steps 1, 2 and 3 are the same here as well.
Step 4: Create and import the target table as per the above mapping flow diagram using SQL*Plus.
Step 5: Creating Mapping:
Go to the Designer tool → click on the Mapping Designer icon → go to the Mappings menu → click on Create → a new window will open; give the mapping name there:
Click on Ok → drag and drop the source EMP table into the mapping workspace as shown below:
→ Go to the Transformations menu → click on Create → select the type as Rank and name it T_RANK as shown below:
→ Click on Create → click on Done → from the Source Qualifier, drag and drop EmpNo, Sal and DeptNo into the Rank transformation table, i.e., T_RANK, as shown below:
→ Double-click on the header of the Rank transformation table → a new window will open; in that, click on the Ports tab → enable the Rank checkbox, i.e., R, of the SAL port, because we are ranking based on salary; it is marked by the brown color as shown below:
→ Click on the Properties tab in the same window → set the Top/Bottom attribute to Top → set the Number of Ranks option to 3; the window is shown below:
→ Click on Ok → drag and drop the target table, i.e., RANK_EMP, into the workspace → connect RANKINDEX of the Rank transformation table to the Rank field in the target table, and connect all the remaining ports from T_RANK to the corresponding ports of the target table as shown below:
→ The remaining process, creating the session, creating the workflow and running the workflow from the task, is the same here as well.
→ After the workflow runs successfully, go to the target table and observe whether the rank was populated correctly.

ROUTER TRANSFORMATION
************************************
→ It is an Active and Connected transformation.
→ The Router transformation is useful to load multiple targets and handle multiple conditions.
→ It is similar to the Filter transformation. The only difference is that the Filter transformation drops the data that does not meet the condition, whereas the Router has an option to capture the data that does not meet the condition. It is useful to test multiple conditions. It has input, output and default groups.
→ If you need to test the same input data against multiple conditions, use a Router transformation in a mapping instead of creating multiple Filter transformations to perform the same task.
→ Likewise, when you use a Router transformation in a mapping, the Integration Service processes the incoming data only once. When you use multiple Filter transformations, the Integration Service processes the incoming data once for each transformation, which affects performance.
Configuring the Router Transformation:
The Router transformation has input and output groups, which you need to configure.
1) Input group: The Designer copies the input port properties to create a set of output ports for each output group.
2) Output groups: The Router transformation has two kinds of output groups: user-defined groups and the default group.
2-i) User-defined groups: Create a user-defined group to test a condition based on the incoming data. Each user-defined group consists of output ports and a group filter condition. You can create or modify the user-defined groups on the Groups tab. Create one user-defined group for each condition you want to specify.
2-ii) Default group: The Designer creates one default group when you create the first user-defined group. You cannot edit or delete the default group, and it does not have a group filter condition. If all the conditions evaluate to FALSE, the Integration Service passes the row to the default group.

Specifying the Group Filter Condition:
Specify the group filter condition on the Groups tab using the Expression Editor. You can enter any expression that returns a single value. The group filter condition returns TRUE or FALSE for each row that passes through the transformation.

Mapping Flow Diagram: (both target tables have the fields EmpNo, Ename, Job, Sal, DeptNo)

EMP (source) → SQ_EMP (Source Qualifier) → ROUTER T/F → R_SAL_UPTO_2500 (Target-1)
                                                      → R_SAL_AFTER_2500 (Target-2)
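Before building it, here is a hedged SQL picture of the split this router performs; each user-defined group acts like one WHERE clause over a single read of the source:

-- Group SAL_UPTO_2500 feeds Target-1
INSERT INTO r_sal_upto_2500 (eno, ename, job, sal, deptno)
SELECT empno, ename, job, sal, deptno FROM emp WHERE sal <= 2500;

-- The DEFAULT group (rows failing every condition) feeds Target-2
INSERT INTO r_sal_after_2500 (eno, ename, job, sal, deptno)
SELECT empno, ename, job, sal, deptno FROM emp WHERE sal > 2500;

Unlike two Filter transformations, the Integration Service reads EMP only once to feed both groups.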
→ Create the two tables R_SAL_UPTO_2500 and R_SAL_AFTER_2500 from SQL*Plus:

CREATE TABLE R_SAL_UPTO_2500
( Eno    NUMBER(5),
  Ename  VARCHAR2(10),
  Job    VARCHAR2(10),
  Sal    NUMBER(6,2),
  DeptNo NUMBER(10,2) );

CREATE TABLE R_SAL_AFTER_2500
( Eno    NUMBER(5),
  Ename  VARCHAR2(10),
  Job    VARCHAR2(10),
  Sal    NUMBER(6,2),
  DeptNo NUMBER(10,2) );

Creating Mapping:
→ Import the source and target tables into the workspace window. Go to the Mapping Designer workspace → go to the Mappings menu → click on Create; a window will open; give the mapping name m_ROUTER as below:
Click on Ok → drag and drop the source and the two target tables into the workspace; the mapping window will look as below:
Now go to the Transformations menu → click on Create → a new window will open as below:
Click on Create → click on Done → copy and drag all the columns from the Source Qualifier table to the Router transformation table as below:
Observe that in the Router transformation we find INPUT, NEWGROUP1 and DEFAULT labels, marked by the red rectangle as shown above.
→ Double-click on the header of the Router transformation → the Edit Transformations window will open → select the Groups tab, which is marked by the red rectangle → click on the Add a New Group to this Router icon, also marked in red, as shown below:
After clicking on the Add a New Group to this Router icon, the window will look as below:
Rename NEWGROUP1 to SAL_UPTO_2500 → under the Group Filter Condition field, click on the down-arrow mark, which is circled in red → the Expression Editor window will open:
Delete the default TRUE from the workspace → go to the Ports tab → double-click on the SAL field → type <= 2500 from the keyboard → click on Ok → click on Ok → click on Ok.
Now connect the fields of the SAL_UPTO_2500 group to one target table and the fields of the DEFAULT1 group to the other target table, as shown below:
The remaining process for creating the task and the workflow is the same here as well.
After the workflow runs and succeeds, go to the database and query the two target tables: one table should hold the details of employees whose salaries are up to 2500, and the other the details of employees whose salaries are above 2500.

Advantages of Router over Filter Transformation:
Use the Router transformation to test multiple conditions on the same input data. If you use more than one Filter transformation, the Integration Service needs to process the input once per Filter transformation. With a Router transformation, the Integration Service processes the input data only once, thereby improving performance.

SORTER TRANSFORMATION
**************************************
→ It is an Active and Connected type of transformation.
→ It is useful to sort the data according to the specified sort key, in either ascending or descending order as required.
→ You can specify multiple columns as sort keys and choose the order (ascending or descending) for each column.

Properties:
Distinct Output Rows — outputs distinct records if you check this option. If we enable it, the transformation will generally not return as many rows as it received, so in this case the Sorter acts as an active transformation.
Null Treated Low — you can configure the way the Sorter transformation treats null values. Enable this property if you want the Integration Service to treat null values as lower than any other value when it performs the sort operation. Disable it if you want the Integration Service to treat null values as higher than any other value.
Case Sensitive — the Case Sensitive property determines whether the Integration Service considers case when sorting data. When you enable it, the Integration Service sorts uppercase characters higher than lowercase characters.
Work Directory — you must specify a work directory that the Integration Service uses to create temporary files while it sorts data. After the Integration Service sorts the data, it deletes the temporary files. You can specify any directory on the Integration Service machine. By default, the Integration Service uses the value of the $PMTempDir process variable.
Sorter Cache Size — the maximum amount of memory used for the sort. In version 8 it is set to Auto; in version 7 it is 8 MB. If the Integration Service cannot allocate this memory, the session fails. If it needs more memory, it pages the data to the work directory and writes a warning to the log file.

Better performance: sort the data with a Sorter transformation before passing it into an Aggregator or Joiner transformation. As the data is sorted, the Integration Service uses memory to do the aggregate and join operations and does not use cache files to process the data.

Mapping Flow Diagram: (here the target table has the same fields as the source table)

EMP (source) → SQ_EMP → SORTER T/F (based on DeptNo) → S_EMP (target)

→ Create the target table, named S_EMP, as per the mapping flow diagram:

CREATE TABLE S_EMP AS SELECT * FROM EMP WHERE 1=2;

This is simply the same structure as the source table, i.e., EMP.

Creating Mapping:
In the Mapping Designer, go to the Mappings menu → click on Create; a small window will open as below:
Click on Ok → drag and drop the source and target tables into the workspace window → go to the Transformations menu → click on Create; a small window will open:
Click on Create → click on Done; the Mapping Designer window will look as below:
Now select and drop all the columns from the Source Qualifier transformation table to the Sorter transformation table; the mapping looks as below:
→ Now double-click on the header of the Sorter transformation → click on the Ports tab → enable the checkbox under Key corresponding to DEPTNO, which is marked by the red rectangle as shown below:
Click on Ok → click on Save from the Repository menu. Creating the session and the workflow is the same for this mapping as well. After running the workflow, go to the back end and check the data.

Performance Tuning:
1) While using the Sorter transformation, configure the sorter cache size to be larger than the input data size.
2) At the Sorter transformation, use hash auto-keys partitioning or hash user-keys partitioning.
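A brief SQL analogy for the two sorter behaviours discussed above (a sketch; the sort actually happens in the Integration Service, spilling to the work directory when memory runs out):

-- Plain sort on the key column, treating NULLs as low
SELECT * FROM emp ORDER BY deptno NULLS FIRST;

-- With Distinct Output Rows enabled the sorter also de-duplicates,
-- which is why it is classed as an active transformation
SELECT DISTINCT deptno, job FROM emp ORDER BY deptno;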
AGGREGATOR TRANSFORMATION
******************************************
→ It is an Active and Connected type of transformation.
→ It is used to perform calculations such as sums, averages and counts on groups of data.
→ The Integration Service stores the data group and row data in the aggregate cache. The Aggregator transformation provides more advantages than plain SQL: you can use conditional clauses to filter rows.

Components and Options of the Aggregator Transformation:
Aggregate Cache: The Integration Service stores data in the aggregate cache until it completes the aggregate calculations. It stores group values in an index cache and row data in the data cache.
Aggregate Expression: Enter an expression in an output port or variable port. The expression can include non-aggregate expressions and conditional clauses.
Group By Port: This tells the Integration Service how to create groups. You can configure input, input/output or variable ports for the group. The Integration Service performs the aggregate calculations and produces one row for each group. If you do not specify any group-by ports, it returns one row for all input rows. By default, the Integration Service returns the last row received for each group along with the result of the aggregation. By using the FIRST function, you can direct the Integration Service to return the first row of the group.
Sorted Input: This option can be used to improve session performance. You can use it only when the input to the Aggregator transformation is sorted on the group-by ports.
Note: By default, the Integration Service treats null values as NULL in aggregate functions. You can change this by configuring the Integration Service.
→ When you run a session that uses an Aggregator transformation, the Integration Service creates an index cache and a data cache in memory to process the transformation. If it requires more space, it stores overflow values in cache files.
Aggregator Index Cache: The value of the index cache can be increased up to 24 MB. The index cache holds group information from the group-by ports. If we are using Group By on DEPTNO, this cache stores values such as 10, 20, 30.
Data Cache: The data cache is generally larger than the aggregator index cache. Columns in the data cache:
i) Variable ports, if any
ii) Non-group-by input/output ports
iii) Non-group-by input ports used in a non-aggregate output expression
iv) Ports containing aggregate functions
→ It can also include one aggregate function nested within another aggregate function, such as MAX(COUNT(ITEM)).

Nested Aggregate Functions: You can include multiple single-level or multiple nested functions in different output ports of an Aggregator transformation. However, you cannot include both single-level and nested functions in the same Aggregator transformation. Therefore, if an Aggregator transformation contains a single-level function in any output port, you cannot use a nested function in any other port of that transformation; if you do, the Designer marks the mapping or mapplet invalid. If you need both single-level and nested functions, create separate Aggregator transformations.

Null Values in Aggregate Functions: When you configure the Integration Service, you can choose how you want it to handle null values in aggregate functions: as NULL or as zero. By default, the Integration Service treats null values as NULL in aggregate functions.

Mapping Flow Diagram:

EMP (source) → SQ_EMP → SORTER T/F → AGGREGATOR T/F → AGG_EMP (target)

Best practice: whatever the order of the sort-key positions in the Sorter transformation, maintain the same order for the group-by ports in the Aggregator transformation.
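In SQL terms, the mapping that follows loads AGG_EMP with a simple GROUP BY; the conditional form SUM(sal, sal < 2500) used later corresponds to a CASE expression (a sketch, assuming the EMP source):

SELECT   deptno,
         SUM(sal) AS dept_wise_sal
FROM     emp
GROUP BY deptno;

-- SQL counterpart of the conditional sum SUM(sal, sal < 2500)
SELECT   deptno,
         SUM(CASE WHEN sal < 2500 THEN sal END) AS dept_wise_sal
FROM     emp
GROUP BY deptno;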
→ The Aggregator transformation supports conditional sums, e.g., SUM(sal, sal < 3000), which sums only the salaries below the 3000 margin.
→ Only two-level nesting of aggregate functions is supported: MAX(COUNT(ITEM)) is valid, whereas MIN(MAX(COUNT(ITEM))) is not.

Process: create the target table as per the mapping flow diagram using SQL*Plus:

CREATE TABLE AGG_EMP
( DEPTNO        NUMBER,
  DEPT_WISE_SAL NUMBER );

and import this table into the Informatica Designer tool as a target.
Go to the Mappings menu → click on Create; a small window will open:
Click on Ok → drag and drop the source and target tables into the workspace as shown below:
Go to the Transformations menu → click on Create; a small window will open:
Click on Create → click on Done. Drag and drop DeptNo and Sal from the Source Qualifier transformation table into the Sorter transformation as shown below:
Now double-click on the header of the Sorter transformation; the Edit Transformations window will open; click on the Ports tab as shown below:
Enable the checkbox under K (key) corresponding to the DeptNo port name, which is marked by the red color → click on Ok.
Go to the Transformations menu → click on Create; a small window will open:
Click on Create → click on Done → copy and drag DeptNo and Sal from the Sorter transformation to the Aggregator transformation as shown below:
Double-click on the Aggregator transformation → click on the Ports tab; the window looks as below:
Enable the checkbox under the Group By column corresponding to the DeptNo port name as shown above → click on Ok.
Select the Sal column in the figure above → click on the Add a New Port icon, which is marked by the green circle → a new column is added under the Port Name field; rename it DEPT_WISE_SAL → disable the Input checkbox for the newly created field → under the Expression field, type SUM(sal) for the same field, as below:
Now go to the Properties tab in the same window → enable the checkbox for the Sorted Input field, which is marked in red.
Click on Ok. Now connect DeptNo to DeptNo and Dept_Wise_Sal to Dept_Wise_Sal from the Aggregator transformation to the target table as shown below:

Scenario 1: Load the target table using the conditional-sum feature in the same mapping.
Solution: all the steps remain the same; the only change is in the Edit Transformations window of the Aggregator transformation: under the Expression field for the DEPT_WISE_SAL port, type SUM(sal, sal < 2500), which is marked by the red rectangle as shown below:
The remaining process to create the session and the workflow is the same. Now go to the back end and check the target table data by querying:
SELECT * FROM AGG_EMP;
We get a result such as:

Deptno  Dept_Wise_Sal
------  -------------
10      1250
20      3480

That is, it sums the salaries department-wise, counting only salaries less than 2500.

Performance Tuning Tips:
1) Use sorted input to decrease the use of aggregate caches: sorted input reduces the amount of data cached during the session and improves session performance. Use this option together with the Sorter transformation to pass sorted data to the Aggregator transformation.
2) Limit connected input/output or output ports: limit the number of connected input/output or output ports to reduce the amount of data the Aggregator transformation stores in the data cache.
3) Filter the data before aggregating it: if you use a Filter transformation in the mapping, place it before the Aggregator transformation to reduce unnecessary aggregation.

LOOK-UP TRANSFORMATION
**************************************
→ It is a Passive transformation and can be Connected or Unconnected.
→ A connected Lookup transformation receives source data, performs a lookup, and returns data to the pipeline.
→ Cache the lookup source to improve performance. If you cache the lookup source, you can use a dynamic or static cache. By default, the lookup cache remains static and does not change during the session. With a dynamic cache, the Integration Service inserts or updates rows in the cache.
→ When you cache the target table as the lookup source, you can look up values in the cache to determine whether they already exist in the target. The Lookup transformation marks rows to insert into or update the target.

Connected Lookup Transformation:
The following steps describe how the Integration Service processes a connected Lookup transformation:
1) A connected Lookup transformation receives input values directly from another transformation in the pipeline.
2) For each input row, the Integration Service queries the lookup source or cache based on the lookup ports and the condition in the transformation.
3) If the transformation is uncached or uses a static cache, the Integration Service returns values from the lookup query.
4) If the transformation uses a dynamic cache, the Integration Service inserts the row into the cache when it does not find it there. When it does find the row in the cache, it updates the row or leaves it unchanged, and flags the row as insert, update or no change.
5) The Integration Service passes the return values from the query to the next transformation. If the transformation uses a dynamic cache, you can pass the rows to a Filter or Router transformation to filter new rows to the target.
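Functionally, the connected lookup built below enriches each EMP row with DNAME from DEPT, much like an outer join (a sketch; like the outer join, the lookup returns NULL for DEPTNO values it cannot find):

SELECT e.deptno, d.dname, e.empno, e.sal
FROM   emp e
LEFT OUTER JOIN dept d ON e.deptno = d.deptno;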
Mapping Flow Diagram: (the target table has the fields DeptNo, Dname, EmpNo, Sal)

EMP (source table) → SQ_EMP → Look-Up Transformation → LKP_EMP (target table)

→ Create the target table as per the mapping flow diagram. Until now we created the target tables using SQL*Plus; this time we will see how to create the table from the Designer itself:
Go to the Designer tool → click on the Target Designer icon in the toolbar → go to the Targets menu → click on Create; a small window opens as below:
Click on Create → click on Done. An empty table then appears in the workspace as below:
Double-click on the header of that empty table; another window will open, as below:
In that window, go to the Columns tab → click on the Add a New Column icon, which is marked by the red symbol. Here the target table has four ports, so click four times on the marked icon; the window will then look as below:
Rename the columns as per our target table in the mapping flow diagram and select the corresponding datatypes; the final window will look as below:
Click on Ok; the target table appears in the workspace as below:
Now the source and target tables are ready and imported into the Designer workspace.
Go to the Mappings menu → click on Create and give the name as in the window below:
Click on Ok → drag and drop the source and target tables into the mapping workspace as below:
Go to the Transformations menu → click on Create → a window will open → select the type as Lookup and the name T_LKP:
Click on Create → another new window will open as below:
In that window, click on the Import button; two options pop up as shown below:
Of those two options → click on the From Relational Table option → the Import Tables window opens as below → provide the ODBC connection details for the source and click on Connect → under the Select Tables sub-block → select the DEPT table and click on Ok.
Click on Done in the Create Transformation window. Now the mapping window will look as below:
Connect DeptNo, EmpNo and Sal from SQ_EMP to the target table, i.e., LKP_EMP → drag and drop DeptNo from SQ_EMP to the T_LKP table and connect Dname from T_LKP to the LKP_EMP table as shown below:
→ Double-click on the header of the T_LKP table → a new window will open → click on the Condition tab → click on the Add a New Condition icon, marked by the red circle → make sure the Lookup Table Column is DEPTNO and the Transformation Port is DEPTNO1:
→ Click on Ok.
→ Now connect DNAME from the lookup (T_LKP) to the target (LKP_EMP) as below:
The remaining process for creating the session and the workflow is the same, but in the session properties we must also map the Lookup transformation to the source ODBC connection, in the same way that the source table is mapped to the source ODBC connection and the target table to the target ODBC connection.
Note: This is the only transformation for which we need to map the transformation itself to a database connection in the Session Properties window. That is, only the Lookup transformation talks to the database directly; no other transformation does this.

UNCONNECTED LOOK-UP TRANSFORMATION
******************************************************
→ An unconnected Lookup transformation is not connected to a source or target. A transformation in the pipeline calls the Lookup transformation with a :LKP expression. The unconnected Lookup transformation returns one column to the calling transformation.
→ The following steps describe the way the Integration Service processes an unconnected Lookup transformation:
1) An unconnected Lookup transformation receives input values from the result of a :LKP expression in another transformation, such as an Update Strategy transformation.
2) The Integration Service queries the lookup source or cache based on the lookup ports and the condition in the transformation.
3) The Integration Service returns one value into the return port of the Lookup transformation.
4) The Lookup transformation passes the return value into the :LKP expression.

Mapping Flow Diagram: (the target table LKP_EMP has the fields DeptNo, Dname, EmpNo, Sal)

EMP (source table) → SQ_EMP → Expression Transformation → LKP_EMP (target table)
(the Expression transformation calls the Unconnected Look-Up transformation)

→ It is not connected in the mapping pipeline; it can be called from any transformation, and it supports expressions.
→ Using an unconnected lookup, we get only one output port.
→ Create and import the target table into the Target Designer as per the mapping flow diagram (exactly the same procedure as above).
→ Go to the Mappings menu → click on Create → name it m_UnConnected_Lookup → click on Ok.
→ Drag and drop the source EMP table into the mapping workspace.
→ Go to the Transformations menu → click on Create → select Expression as the type and name it T_EXP → click on Create → click on Done as below:
→ From the SQ_EMP table, drag and drop DeptNo, EmpNo and Sal into the Expression transformation (T_EXP) as below:
→ Double-click on the Expression transformation → click on the Ports tab → select the DeptNo column → click on the Add a New Port icon → name the new port DNAME → disable its Input (I) checkbox → click on Ok, as shown below:
→ Connect all the columns from the Expression transformation table to the target table as below:
→ Go to the Transformations menu → click on Create → select Lookup as the type and name it T_LookUp as below:
→ Click on Create → a new window will open as below:
→ Click on the Source button → select the DEPT table from the sub-window → click on Ok. If the DEPT table is not listed, we need to import it from the database first → click on Ok → click on Done.
→ Double-click on the Lookup transformation → click on the Ports tab → click on Add a New Port → name it IN_DEPTNO with the datatype Decimal → disable its Output (O) checkbox, as below:
→ In the same window, click on the Condition tab → click on the Add a New Condition icon → make sure the Lookup Table Column is DEPTNO and the Transformation Port is IN_DEPTNO → click on Ok.
→ Double-click on the Expression transformation → click on the expression icon of the DNAME port, as below, and provide the expression:
:LKP.T_LOOKUP(DEPTNO)
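Because the unconnected lookup returns a single value per call, the expression above is analogous to a scalar subquery in SQL (a sketch):

SELECT e.empno, e.sal, e.deptno,
       ( SELECT d.dname
         FROM   dept d
         WHERE  d.deptno = e.deptno ) AS dname  -- one value per row, like :LKP
FROM   emp e;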
Click on Ok → again click on Ok → save it.
Note: if we have more than one mapping flow, it is possible to call the same unconnected lookup again.

Performance Tuning Tips:
1) Add an index to the columns used in a lookup condition: if you have privileges to modify the database containing the lookup table, you can improve performance for both cached and uncached lookups. This is important for very large lookup tables. Since the Integration Service needs to query, sort and compare values in these columns, the index should include every column used in a lookup condition.
2) Place conditions with an equality operator (=) first: if you include more than one lookup condition, place the conditions in the following order to optimize lookup performance:
Equal to (=)
Less than (<), greater than (>), less than or equal to (<=), greater than or equal to (>=)
Not equal to (!=)
3) Cache small lookup tables: improve session performance by caching small lookup tables. The result of the lookup query and processing is the same whether or not you cache the lookup table.
4) Join tables in the database: if the lookup table is on the same database as the source table in the mapping and caching is not feasible, join the tables in the source database rather than using a Lookup transformation.
5) Use a persistent lookup cache for static lookups: if the lookup source does not change between sessions, configure the Lookup transformation to use a persistent lookup cache. The Integration Service then saves and reuses cache files from session to session, eliminating the time required to read the lookup source.
6) Call unconnected Lookup transformations with the :LKP reference qualifier: when you write an expression using the :LKP reference qualifier, you call unconnected Lookup transformations only. If you try to call a connected Lookup transformation this way, the Designer displays an error and marks the mapping invalid.
7) Configure a pipeline Lookup transformation to improve performance when processing a relational or flat-file lookup source: you can create partitions to process a relational or flat-file lookup source when you define the lookup source as a source qualifier. Configure a non-reusable pipeline Lookup transformation and create partitions in the partial pipeline that processes the lookup source.

SEQUENCE GENERATOR TRANSFORMATION
*******************************************************
OLTP Data:

Enum  Location   Year
101   Hyd        2002
101   Chennai    2006
101   Bangalore  2011

OLAP Data:

W_ID  Enum  Location   Year
1     101   Hyd        2002
2     101   Chennai    2006
3     101   Bangalore  2011

In the first table, Enum acts as the primary key, whereas in the second table the primary key is W_ID, the warehouse ID. It is also called a surrogate key.
Surrogate Key: the key that acts as the primary key in the data warehouse.
→ The Sequence Generator is useful to generate surrogate keys; it generates sequential numbers to use as surrogate key values.
→ It is a Passive and Connected type of transformation. It contains two output ports that you can connect to one or more transformations.
→ The Integration Service generates a block of sequence numbers each time a block of rows enters a connected transformation. If you connect CURRVAL, the Integration Service processes one row in each block.
→ When NEXTVAL is connected to the input port of another transformation, the Integration Service generates a sequence of numbers. When CURRVAL is connected to
the input port of another transformation, the Integration Service generates the NEXTVAL value plus the Increment By value.
→ You can make a Sequence Generator reusable and use it in multiple mappings. You might reuse a Sequence Generator when you perform multiple loads to a single target. For example, if you have a large input file that you separate into three sessions running in parallel, use one reusable Sequence Generator in all three sessions to generate the primary key values; if you used different Sequence Generators, the Integration Service might generate duplicate key values.
→ You can complete the following tasks with a Sequence Generator transformation:
1) Create keys
2) Replace missing values
3) Cycle through a sequential range of numbers
Creating keys is the easy case; we will see it in the mapping below.
Replace Missing Values: use the Sequence Generator transformation to replace missing keys by using NEXTVAL with the IIF and ISNULL functions. For example, to replace null values in the ORDER_NO column, create a Sequence Generator transformation with the appropriate properties and drag the NEXTVAL port to an Expression transformation. In the Expression transformation, drag in the ORDER_NO port along with any other necessary ports, then create an output port, ALL_ORDERS, with the following expression to replace null orders:
IIF ( ISNULL ( ORDER_NO ), NEXTVAL, ORDER_NO )
→ The Sequence Generator transformation has two output ports, NEXTVAL and CURRVAL. You cannot edit or delete these ports; likewise, you cannot add ports to the transformation.

Mapping Flow Diagram: (the fields of the target table are Row_Id, EmpNo, Ename, Sal)

EMP (source table) → Sequence Generator T/F → SEQ_EMP (target table)
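The surrogate-key behaviour of NEXTVAL is analogous to an Oracle sequence feeding an insert (a sketch; inside Informatica the numbers come from the transformation itself, not from a database sequence, and the sequence name below is illustrative):

CREATE SEQUENCE seq_emp_wid START WITH 1 INCREMENT BY 1;

INSERT INTO seq_emp (row_id, empno, ename, sal)
SELECT seq_emp_wid.NEXTVAL, empno, ename, sal
FROM   emp;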
• 75. à Create the Target table SEQ_EMP as per the known technique. Go to SQL*Plus and create the Target table as below: > CREATE TABLE SEQ_EMP ( Row_Id NUMBER, EmpNo NUMBER, Ename VARCHAR2(20), Sal NUMBER ) à import that target table into the Designer tool à now, drag and drop the Source and Target tables into the workspace of the Mapping window as below: à connect EmpNo, Ename, Sal from SQ_EMP to the target SEQ_EMP table as below:
  • 76. à click on Transformation menuà click on Createà select type as Sequence Generator and name it as t_Seq as below : à Click on Create à click on Done , the mapping window workspace will look as: à Connect NextVal of transformation table to Row_Id of target table as below:
• 77. à Save the mapping à the remaining process for creating the Session and the Workflow is the same here also; execute the workflow and observe the Target Table. JOINER TRANSFORMATION ************************************* à It is an Active and Connected type of transformation. à Use the Joiner transformation to join source data from two related heterogeneous sources residing in different locations or file systems. You can also join data from the same source. The Joiner transformation joins sources with at least one matching column. Note: The Joiner transformation does not match null values. For example, if both EMP_ID1 and EMP_ID2 contain a row with a null value, the Integration Service does not consider them a match and does not join the two rows. To join rows with null values, replace null input with default values, and then join on the default values. à The Joiner transformation supports the following types of joins : 1) Normal 2) Master Outer 3) Detail Outer 4) Full Outer. Note: A normal or master outer join performs faster than a full outer or detail outer join.
• 78. Normal Join: With a normal join, the Integration Service discards all rows of data from the master and detail sources that do not match, based on the condition. Master Outer: A master outer join keeps all rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source. Detail Outer : A detail outer join keeps all rows of data from the master source and the matching rows from the detail source. It discards the unmatched rows from the detail source. Full Outer Join: A full outer join keeps all rows of data from both the master and detail sources. (For SQL equivalents, see the sketch below.) à Generally, when we join two source tables we get two Source Qualifiers, one per source, combined toward the target:
EMP (Oracle Database) à SQ_EMP à JOINER_EMP (Target Table)
DEPT (Oracle Database) à SQ_DEPT à JOINER_EMP (Target Table)
But in our case both sources come from the same database, i.e., Oracle, so we do not need a separate Source Qualifier for each source table; instead we delete one Source Qualifier, map both sources into the remaining one, and connect it to the target. Joining Two Tables Using a Single Source Qualifier : Mapping Flow Diagram : ( Here the Target table has the fields EmpNo, Dname, Job, Sal, DeptNo )
EMP (Oracle Database) + DEPT (Oracle Database) à SQ_EMPDEPT à JOINER_EMP (Target Table)
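For reference, the four join types correspond to ANSI SQL joins; a hedged sketch, assuming DEPT is the master and EMP the detail (as in the mappings below):
> -- Normal join: matching rows only
> SELECT e.EMPNO, d.DNAME FROM EMP e JOIN DEPT d ON e.DEPTNO = d.DEPTNO;
> -- Master outer: all detail (EMP) rows plus matching master (DEPT) rows
> SELECT e.EMPNO, d.DNAME FROM EMP e LEFT JOIN DEPT d ON e.DEPTNO = d.DEPTNO;
> -- Detail outer: all master (DEPT) rows plus matching detail (EMP) rows
> SELECT e.EMPNO, d.DNAME FROM EMP e RIGHT JOIN DEPT d ON e.DEPTNO = d.DEPTNO;
> -- Full outer: all rows from both sources
> SELECT e.EMPNO, d.DNAME FROM EMP e FULL OUTER JOIN DEPT d ON e.DEPTNO = d.DEPTNO;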
• 79. à A Normal Join in Informatica is equivalent to an Equi-Join at the Oracle database level. à A Master Outer Join returns all the records of the Detail table (in this case EMP), not all the records of the Master table (in this case DEPT). Silly note : e.deptno = d.deptno (+) ----> keeps all the data of the employee table (e); e.deptno (+) = d.deptno ----> keeps all the data of the department table (d). à Create the Target table as per the mapping flow diagram. Go to SQL*Plus and create the table as below : > CREATE TABLE JOINER_EMP ( EmpNo Number, Dname Varchar2(20), Job Varchar2(20), Sal Number, DeptNo Number ); à go to the Mappings menu à click on Create à name it m_JOINER as below : à Drag and drop the source tables EMP and DEPT and the target table JOINER_EMP into the mapping window workspace; the window will look like as below:
• 80. à in our case both source tables come from the same database, so we do not need two separate Source Qualifiers as discussed previously; just delete either one (here I am deleting SQ_DEPT), and the window will look as below: à Double click on the header of the SQ_EMP table; a window will open à in that click on the Rename button à a small window will open à in that, for better clarity, type the name SQ_EMPDEPT as shown below:
• 81. à click on Ok. à Drag and drop DeptNo, Dname, Loc from the DEPT table to the renamed SQ_EMPDEPT table; the window will then look like as:
• 82. à Map the corresponding ports from the SQ_EMPDEPT table to the JOINER_EMP table as shown below: à now, double click on the SQ_EMPDEPT transformation à a new window will open à in that click on the Properties tab à for the Sql Query option click on the corresponding down arrow mark, which is highlighted in brown, as shown below:
• 83. à After clicking on that down arrow mark à another window will open as below: In that provide the ODBC data source as RRITEC_SOURCE, the Username as SOURCE_USER and the Password as SOURCE_USER à then click on the Generate SQL button à a query is generated automatically in the workspace of the SQL sub-window (marked in green in the window below). The window will then look like as below:
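The screenshot is not reproduced here; the generated statement typically resembles the following sketch (the exact column list depends on the ports present in SQ_EMPDEPT):
> SELECT EMP.EMPNO, EMP.ENAME, EMP.JOB, EMP.SAL, DEPT.DEPTNO, DEPT.DNAME, DEPT.LOC
  FROM EMP, DEPT
  WHERE EMP.DEPTNO = DEPT.DEPTNO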
• 84. Click on Ok à again click on Ok. The remaining process to create the Session, create the Workflow and run the Workflow is the same here also. After running the workflow, go to the target table and preview the data to check whether it is populated correctly. Joining one Database table and one Flat File : Silly note : whichever table is dragged and dropped into the workspace second is automatically treated by the Informatica Designer as the MASTER. à Open Notepad and type:
Deptno,Dname,Loc
10,Hr,Hyderabad
20,Sales,Chennai
30,Stock,Pune
Click on Save with the name DEPT.txt and place this notepad file at the location F:\Informatica\PowerCenter 8.6.0\Server\Infa_Shared\SrcFiles. à We have already created the target table JOINER_EMP, so you can use the same table in this scenario also. à Import the source table and the source notepad file into the Informatica Designer tool. à Go to the Mapping Designer window à click on the Mappings menu à click on Create, name it m_JOINER_EMP, and drag and drop the source EMP, the source flat file and the target JOINER_EMP into the workspace as shown below:
• 85. à go to the Transformations menu à click on Create à select the transformation type as Joiner and name it T_JOIN as below: à click on Create à click on Done; the mapping window then looks as below:
• 86. à Drag and drop Ename, Sal, DeptNo from the SQ_EMP table, and Deptno, Dname from SQ_DEPT, to the T_JOIN transformation table as below: à Double click on the Joiner transformation; the Edit Transformations window will open à in that click on the Ports tab and make sure that the source with the smaller number of records is treated as the Master (its Master checkbox enabled), as below:
• 87. à click on the Condition tab in the same window: à click on the Add a new condition icon (marked in red) and make sure the condition equates the two DeptNo ports, as highlighted in red below:
• 88. à click on Ok à connect all the required ports from the Joiner transformation to the Target table ports as below: à click on Save. à go to the PowerCenter Workflow Manager window à click on the Workflows menu à click on Create à name it w_Hetro_Join; the window will look as:
• 89. Click on Ok à go to the Tasks menu à click on Create and give the name S_Hetro_Join as below:
• 90. à Click on Create; a window will open in which we need to select our mapping as below : à click on Ok à click on Done à then connect the Session and the Workflow using the Link Task, and the workspace will look as: à now double click on the Session à the Edit Tasks window will open à in that click on the Mapping tab as shown below:
• 91. à from the Navigator window at the left side, click on SQ_EMP and map the corresponding Relational DB connection to SOURCE as shown below:
• 92. à similarly select SQ_DEPT from the left-side Navigator window à the corresponding properties will appear at the right side à in that set:
Type : File
Source File Type : Direct
Source File Directory : $PMSourceFileDir
Source File Name : DEPT.txt
à similarly select the target table from the left-side navigator and map the corresponding Relational DB connection to TARGET à click on Ok à right click on the Workflow, click on Start Workflow from Task, and then go to the target table and observe the data.
• 93. Joining Two Flat Files using the Joiner Transformation : à Open notepad and type the below data :
Deptno,Ename,Sal
10,Kishor,1000
20,Divya,2000
30,Venkat,3000
Save the above notepad file with the name EMP.txt à similarly open another notepad and type the below data :
Deptno,Dname,Loc
10,Hr,Hyderabad
20,Sales,Chennai
30,Stock,Pune
Save the above notepad file with the name DEPT.txt à Now place the above two notepad files in the location F:\Informatica\PowerCenter 8.6.0\Server\Infa_Shared\SrcFiles à Go to the Mapping Designer window à go to the Mappings menu à click on Create à give the mapping name m_Join_FlatFiles as below : Click on Ok à now drag and drop the two source flat files and the target table; the mapping window then looks as:
• 94. à go to the Transformations menu à click on Create à select the transformation type as Joiner and give the name T_Join_FlatFiles as below : à click on Create à click on Done à then drag and drop Deptno, Ename and Sal from SQ_EMP, and Deptno, Dname from SQ_DEPT, to the T_Join_FlatFiles transformation table as shown below:
• 95. à Double click on the header of the Joiner transformation; the Edit Transformations window will open à in that click on the Condition tab à make sure the condition is as in the red-marked rectangle below: à click on Ok à then connect the corresponding ports from the Joiner transformation table to the target table as below: à the remaining process to create the Workflow, create the Task and set the Task properties is the same as above à after running the workflow, go to the target table and check whether the data is properly populated.
• 96. Loading one Source Flat File into another Target Flat File : à in the Designer tool, click on the Target Designer icon à go to the Targets menu à click on Create à name it FlatFiles_Target and select the Database type as Flat File as shown below : à click on Create à click on Done. à Double click on the header of the above target file table à the Edit Tables window will open à click on the Columns tab à click on the Add a new column to this table icon three times (because we need three fields in the target table); the window will then look as:
• 97. à Name those fields Deptno, Ename, Sal and set the appropriate datatypes as shown below:
• 98. à click on Ok à click on the Mapping Designer icon à go to the Mappings menu à click on Create à give the mapping name m_FlatFiles_Target as shown below : à Click on Ok à now drag and drop the source EMP.txt and the target FlatFiles_Target.txt file into the mapping window workspace as shown below : à Go to the Workflow Manager tool à go to the Workflows menu à click on Create à give the name w_FlatFiles_Target and click on Ok. à Create the session with the name s_FlatFiles_Target and click on Ok à now connect the Workflow and the Session using the Link Task à now run the Workflow. à After running the Workflow, go to the back-end location F:\Informatica\PowerCenter 8.6.0\Server\Infa_Shared\TgtFiles (the default target file directory) and check whether the target flat file was created. Loading a List of Flat Files into a Table Using the Indirect Method : à Open a notepad and type the below data :
Deptno,Ename,Sal
10,Kishor,1000
20,Divya,2000
30,Shreya,3000
Save this file with the name EMP.txt à similarly, open one more notepad and type as below :
Deptno,Ename,Sal
40,Swetha,4000
50,Sweety,5000
60,Alekhya,6000
Save this file with the name EMP1.txt. Now place the above two notepads in the location C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles
• 99. à similarly, open one more notepad and type the above two notepads' path locations as below :
C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles\EMP.txt
C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles\EMP1.txt
Save this notepad with the name EMPLIST.txt and place it also in the same SrcFiles location. à Open the Designer tool à import EMP as a Source à create and import a target table named T_ListofFiles with the fields Deptno, Ename, Sal using SQL*Plus. à Create a mapping with the name m_Load_ListofFiles. Drag and drop the source (EMP is enough) and the target into the workspace and connect the corresponding ports as shown below : à go to the Workflow Manager à create a workflow with the name w_T_ListofFiles à create a task with the name s_T_ListofFiles à connect the workflow and the session à double click on the Session à click on the Mapping tab à select SQ_EMP from the left-side navigator window and the corresponding properties will appear at the right side. In that set:
Source File Type : Indirect
Source File Directory : $PMSourceFileDir
Source File Name : EMPLIST.txt
Click on Ok à save it and run the workflow à go to the table T_ListofFiles and observe the data.
• 100. XML SOURCE QUALIFIER TRANSFORMATION ***************************************************** à It is an Active and Connected type of transformation. à Open notepad and type the below data:
<EMP>
<ROWS>
<DEPTNO>10</DEPTNO>
<ENAME>RamReddy</ENAME>
<SAL>1000</SAL>
</ROWS>
<ROWS>
<DEPTNO>20</DEPTNO>
<ENAME>Venkat</ENAME>
<SAL>2000</SAL>
</ROWS>
<ROWS>
<DEPTNO>30</DEPTNO>
<ENAME>Divya</ENAME>
<SAL>3000</SAL>
</ROWS>
</EMP>
à Save the above file with the name EMP.xml and place that file in the location C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles à Open the Informatica Designer tool à click on the Source Analyzer icon from the tool bar à click on the Sources menu à click on Import From XML Definition à a new window will open à from that navigate to our SrcFiles folder and select the newly created XML file as shown below :
• 101. à click on Open à a warning window will appear as below: à click on Yes à another window will appear as below: à if you have time, read all those options; otherwise simply click on Ok à a new window will open as :
• 102. à click on Next à a new window will open; in that select the Hierarchy Relationship radio button and select the De-normalized XML views option as below:
• 103. à click on Finish à if we observe the Source Analyzer window, the table structure looks somewhat different from a normal database table structure: à create a target table with the name XML_EMP with the columns DEPTNO, ENAME, SAL and import it into the Informatica Designer tool: > CREATE TABLE XML_EMP ( Deptno Number(5), Ename Varchar2(30), Sal Number(10,2) ) ; à click on the Mapping Designer icon à go to the Mappings menu à click on Create à name it m_XML as: à click on Ok à from the left-side Navigator window, expand Sources à expand the XML_File tab à drag and drop EMP into the work area, and also drag and drop the Target into the work area as:
• 104. à connect the corresponding ports from the XML Source Qualifier to the Target table as: à go to the Workflow Manager tool à create a workflow with the name w_XML and a session with the name s_XML and connect the two using the Link Task. à double click on the Session à click on the Mapping tab à select XMLDSQ_EMP from the left-side navigator window and provide:
Source File Directory : $PMSourceFileDir
Source File Name : EMP.xml
Source File Type : Direct
as shown below:
• 105. à select the target XML_EMP table and map the proper Target connection à click on Ok and run the Workflow. NORMALIZER TRANSFORMATION ******************************************** à It is an Active and Connected transformation. à It is useful to read data from COBOL files (it is associated with a COBOL file definition). à It is useful to convert a single input record into multiple output records. The below table is in De-Normalized form :
Year    Account_Type    Q1      Q2      Q3      Q4
------- --------------- ------- ------- ------- -------
2012    Savings         1000    2000    3000    4000
The below table is in Normalized form :
Year    Account_Type    Quarter    Amount
------- --------------- ---------- ----------
2012    Savings         1          1000
2012    Savings         2          2000
2012    Savings         3          3000
2012    Savings         4          4000
In the above table, the field Quarter is the GCID (generated column id). à Open one notepad and type as below :
Year,Account_Type,Q1,Q2,Q3,Q4
2012,Savings,1000,2000,3000,4000
Save the notepad and place this file in the location C:\Informatica\PowerCenter8.6.0\server\infa_shared\SrcFiles à Create a target table named T_EMP_NORMALIZER with the columns Year, Account_Type, Quarter, Amount using SQL*Plus; the DDL follows the short SQL sketch below.
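What the Normalizer does to this record is equivalent to a SQL unpivot; a minimal sketch for intuition, assuming an illustrative source table acct_denorm with the denormalized columns above:
> SELECT Year, Account_Type, 1 AS Quarter, Q1 AS Amount FROM acct_denorm
  UNION ALL SELECT Year, Account_Type, 2, Q2 FROM acct_denorm
  UNION ALL SELECT Year, Account_Type, 3, Q3 FROM acct_denorm
  UNION ALL SELECT Year, Account_Type, 4, Q4 FROM acct_denorm;
-- one input row becomes four output rows, one per quarter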
• 106. > CREATE TABLE T_EMP_NORMALIZER ( Year Number(5), Account_Type Varchar2(30), Quarter Number(3), Amount Number(10,2) ) ; and import that table into the Designer tool. à Go to the Designer tool à click on the Mapping Designer window icon à go to the Mappings menu à create a mapping with the name m_Normalizer as shown below: à click on Ok à drag and drop the source flat file and the target table into the mapping window workspace. à go to the Transformations menu à click on Create à select the transformation type as Normalizer and give the name T_Normalizer as below: à click on Create à click on Done à now the mapping window will look as:
• 107. à Double click on the Normalizer transformation à the Edit Transformations window will open as : In the above window, click on the Normalizer tab and click on the Add a new column to this table icon three times to create three ports (marked in red in the above window); the resultant window will look as:
• 108. à change the Column Names, Datatypes and the Occurs field as below (typically Year and Account_Type with Occurs 0, and Amount with Occurs 4 so the four quarter columns collapse into one port with a generated column id): à click on Ok à the Normalizer transformation table then shows its fields in a different manner from what we regularly see in the mapping window workspace:
• 109. à connect the ports from SQ_EMP to T_NORMALIZER, and connect the ports from the T_NORMALIZER table to the target T_EMP_NORMALIZER, as shown below: UPDATE STRATEGY TRANSFORMATION ************************************************ à It is an Active and Connected transformation. à The target table should contain a Primary Key. Implementing SCD1 : ( an SCD is useful to maintain current data ) à SCD1 keeps the Target data in sync with the Source data (current values only, no history). Mapping Flow Diagram :
EMP à SQ_EMP à LookUp T/F à Expression T/F à Router T/F
Insert flow : Router à Expression T/F à Update Strategy T/F (for the Insert flow) à SCD1 (target), with a Sequence Generator T/F supplying the surrogate key
Update flow : Router à Expression T/F à Update Strategy T/F (for the Update flow) à SCD1 (target)
• 110. à The target table SCD1 has the fields EmpKey, EmpNo, Ename, Job, Sal. Go to SQL*Plus and create the target table as below : > CREATE TABLE SCD1 ( EmpKey Number(5), EmpNo Number(5), Ename Varchar2(30), Job Varchar2(30), Sal Number(5) ) ; Import this table into the Informatica Designer tool. à Go to the Mappings menu à click on Create à give the name m_SCD1_EMP as: à click on Ok à drag and drop the source EMP table and the target table into the mapping window workspace as : à Go to the Transformations menu à click on Create à select the transformation type as LookUp and name it T_LKP_SCD_UPDATE as shown below:
• 111. à click on Create à a window will open; in that click on the Target button à select our target table, i.e., SCD1, from the available list of tables as shown below: à click on Ok à click on Done à the mapping window will look as: à Drag and drop the EmpNo port from SQ_EMP to the LookUp transformation table as :
• 112. à Double click on the LookUp transformation table à click on the Condition tab à click on the Add a new condition icon à make sure the condition is as below (EmpNo = EmpNo1) : à Click on Ok. Note : if we get any errors after clicking Ok in the above window, just change the datatype of EMPNO to Decimal in the LookUp transformation table. à Go to the Transformations menu à click on Create à select the type as Expression, give the name EXP_B4_ROUTER_UPDATE, and drag and drop EmpNo, Ename, Job, Sal from SQ_EMP (the incoming source values) to the Expression transformation table as below:
• 113. à from the LookUp transformation, drag and drop EmpKey to the Expression transformation as: à double click on the Expression transformation à click on the Ports tab à click on the Add a new port icon twice to create two output ports:
NEW_RECORD (Output) : IIF( ISNULL( EMPKEY ), TRUE, FALSE )
UPDATE_RECORD (Output) : IIF( NOT ISNULL( EMPKEY ), TRUE, FALSE )
as shown below:
• 114. à click on Ok à drop a Router transformation à drag and drop all the ports from the Expression transformation to the Router transformation as: à double click on the Router transformation à click on the Groups tab à click on the Add a new group icon twice to create two groups and give the conditions:
NEW_RECORD : NEW_RECORD = TRUE
UPDATE_RECORD : UPDATE_RECORD = TRUE
as shown below:
• 115. à click on Ok à now the mapping will look like as: New Record Flow : à drop three transformations of type Expression, Update Strategy and Sequence Generator into the mapping window workspace à drag and drop the ports EMPNO1, ENAME1, JOB1, SAL1 from the NEW_RECORD group into the Expression transformation as:
  • 116. à drag and drop all the above four ports from Expression Transformation to Update Strategy transformation as:
• 117. à double click on the Update Strategy transformation à click on the Properties tab à set the Update Strategy Expression to 0 (DD_INSERT): à click on Ok à connect all the ports from the Update Strategy to the target table, and connect the NEXTVAL port from the Sequence Generator transformation to EMPKEY of the target table as
• 118. Update Record Flow : à drop Update Strategy and Expression transformations into the above mapping workspace à in the above mapping, from the Router transformation, drag and drop the ports EMPKEY3, EMPNO3, ENAME3, JOB3, SAL3 of the UPDATE_RECORD group into the Expression transformation as:
  • 119. à drag and drop all the ports from Expression Transformation to Update Strategy Transformation as:
• 120. à double click on the Update Strategy transformation à click on the Properties tab à set the Update Strategy Expression value to 1 (DD_UPDATE; the constants are DD_INSERT = 0, DD_UPDATE = 1, DD_DELETE = 2, DD_REJECT = 3) as shown below: à click on Ok à copy and paste the target table into the mapping workspace for the updated-record flow à and connect all the ports from the Update Strategy transformation to this second target as :
• 121. à go to the Workflow Manager tool à create a workflow with the name w_SCD1 à create a session with the name s_SCD1 and map this task to our above mapping m_SCD1_EMP à click on Ok à connect the Task and the Workflow using the Link Task. à double click on the Session à click on the Mapping tab à select SQ_EMP and set it to the SOURCE connection à select the Target and set it to the TARGET connection à select the LookUp transformation and set it to the TARGET connection à click on the Properties tab à set Treat Source Rows As to Data Driven à click on Ok à save the task, run the workflow and observe the target table data. Testing : à go to SQL*Plus à log in as TARGET_USER;
> INSERT INTO SOURCE_USER.EMP ( Empno, Ename, Job, Sal ) VALUES ( 102, 'Kishor', 'Manager', 90000 );
> COMMIT;
> UPDATE SOURCE_USER.EMP SET Sal = 5001 WHERE Empno = 7839;
> COMMIT;
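After the run, a quick check of the target (a sketch, run from the target schema):
> SELECT EmpKey, EmpNo, Ename, Job, Sal FROM SCD1 WHERE EmpNo IN ( 102, 7839 );
-- expected: one new row for 102, and 7839's Sal overwritten in place (SCD1 keeps no history)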
• 122. à now run the Workflow and observe the result. Exercise 1 : Try to implement the above scenario using the below mapping flow diagram:
EMP à SQ_EMP à LookUp T/F à Update Strategy T/F à Target
The condition to be used in the Update Strategy is: IIF( ISNULL( EMPKEY ), 0, 1 )
Exercise 2 : Implement the above SCD1 using the in-built Slowly Changing Dimension wizard. SLOWLY CHANGING DIMENSION 2 ************************************************ à Create a target table with the name EMP_SCD2 with the columns EMPKEY (pk), EMPNO, ENAME, JOB, DEPTNO, SAL, VERSION and import it into the Informatica Designer tool: > CREATE TABLE EMP_SCD2 ( Empkey number(5) Primary Key, Empno number(5), Ename varchar2(20), Job varchar2(20), Deptno number(5), Sal number(10,2), Version number(5) ) ; à create a mapping with the name m_SCD2 as : à click on Ok à drag and drop the Source and Target tables into the mapping workspace as
• 123. à drop a LookUp transformation and use the target table as the lookup table : à click on Create à a new window will open; in that click on the Target button à select EMP_SCD2 as below :
• 124. à click on Ok à from the Source Qualifier transformation, drag and drop EMPNO into the LookUp transformation as: à double click on the LookUp transformation à click on the Ports tab à change the datatypes from Double to Decimal (if they are initially Double) à click on the Condition tab à click on Add a new condition à make sure the condition is EmpNo = EmpNo1 as : à click on Ok à drop an Expression transformation into the mapping window à drag and drop EMPNO, ENAME, JOB, SAL, DEPTNO from the Source Qualifier transformation into the Expression transformation as :
• 125. à drag and drop EMPKEY, SAL, VERSION from the LookUp transformation into the Expression transformation as: à double click on the Expression transformation à click on the Ports tab à select the last port à click on the Add a new port to this transformation icon twice to create two ports à name those two ports as shown (typically NEW_RECORD and UPDATE_RECORD, with IIF/ISNULL expressions on EMPKEY analogous to SCD1; for SCD2 the update test usually also compares the incoming SAL with the looked-up SAL to detect changes):
  • 126. à click on Okà now the mapping window will look as:
  • 127. à drop a Router Transformationà drag and drop all the ports from Expression Transformation to Router Transformation as: à double click on Router Transformationà click on Groups tabà click on Add a new group to this transformation icon twice to create two groups and name those two groups as NEWRECORD and UPDATEGROUP as below:
• 128. à click on Ok. New Record Flow : à drop Expression, Sequence Generator and Update Strategy transformations into the mapping workspace as: à drag and drop EMPNO, ENAME, JOB, SAL, DEPTNO of the New Record section from the Router transformation to the Expression transformation as: à double click on the Expression transformation à click on the Ports tab à select the last port à click on the Add a new port icon à name the new port VERSION, with data type Decimal, and set its expression to 0 (the first version):
  • 129. à click on Okà from Expression Transformation, drag and drop all the ports to Update Strategy Transformation à connect corresponding ports from Update Strategy to Target table à connect NEXTVAL port from Sequence Generator transformation to EMPKEY of target table as :
• 130. Update Record Flow : à Drop an Expression transformation and an Update Strategy transformation into the workspace à from the Update Record group of the Router transformation, drag and drop EMPNO3, ENAME3, JOB3, SAL3, DEPTNO3, VERSION3 into the Expression transformation as: à double click on the Expression transformation à click on the Ports tab à select the last port à click on the Add a new port icon à name that port VERSION, with datatype Decimal, and under expression type VERSION3 + 1 à click on Ok
• 131. à click on Ok à from the Expression transformation, drag and drop EMPNO, ENAME, JOB, SAL, DEPTNO, VERSION to the Update Strategy transformation à copy and paste the target table in the mapping workspace to store the update-record data à connect the corresponding ports from the Update Strategy transformation to the target table à connect the NEXTVAL port from the Sequence Generator transformation to EMPKEY of the target table as : à create a workflow with the name w_SCD2 and a task with the name s_SCD2 à double click on the Session à click on the Mapping tab à make sure the Source and Target connections are appropriate à make sure the LookUp transformation connection points to the Target, and run the workflow. Testing : à in the source EMP table, add one more record using an INSERT statement à update any existing record in the same source table à now run the above workflow and observe the output. Exercise : Use the in-built SCD2 wizard and build the mapping.
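To see the history that SCD2 accumulates, a quick check after the test run (a sketch):
> SELECT Empno, Sal, Version FROM EMP_SCD2 ORDER BY Empno, Version;
-- expected: the updated employee appears twice, Version 0 with the old Sal and Version 1 with the new one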
• 132. SLOWLY CHANGING DIMENSION - 3 ************************************************ à It is useful to maintain the current data plus one level of previous data. Process using the wizard only : à go to the Mappings menu à select the Wizards option à click on the Slowly Changing Dimension option; a window will open in which we provide the name SCD3 à select the last radio button (Type-3 dimension) as:
• 133. à click on Next à select the source table SOURCE_USER.EMP and give the new target table name EMP_SCD3 as:
• 134. à click on Next à select the EMPNO column from the Target Table Fields section and click on the Add>> button; the column is then placed in the Logical Key Fields section as :
• 135. à select the SAL column from the Target Table Fields section and click on the Add>> button; the column is then placed in the Fields to compare for changes section as :
• 136. à click on Next à click on Finish à a mapping is then generated automatically and placed in the mapping window workspace as:
• 137. à save the mapping à click on the Target Designer workspace icon à from the left-side Navigator window, expand the Targets tab à drag and drop the EMP_SCD3 table into the Target Designer workspace à go to the Targets menu à click on the Generate and Execute SQL option; a new window will open as: à in the above, click on the Generate and Execute option; a small window will open in which we provide the ODBC connection, username and password for the target database as: à click on Connect à click on Close à save it à create a workflow with the name w_SCD3 and a task with the name s_SCD3 and connect the Session and the Workflow using the Link Task. à double click on the Task à provide the database connections appropriately for both source and target à click on Ok à save the workflow and run the workflow. Testing : à log in to the SOURCE_USER schema using SQL*Plus and give the statements:
> UPDATE EMP SET SAL = 10000 WHERE EMPNO = 7839 ;
> COMMIT ;
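A quick way to verify the Type-3 behaviour (a hedged sketch: the wizard typically generates a PM_PREV_ column per compared field, so the column names here are assumptions; check your EMP_SCD3 definition):
> SELECT EMPNO, SAL, PM_PREV_SAL FROM EMP_SCD3 WHERE EMPNO = 7839;
-- expected: SAL holds 10000 and PM_PREV_SAL holds the prior salary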
• 138. Exercise : Use a Router transformation in place of the Filter transformation and develop SCD3. SOURCE QUALIFIER TRANSFORMATION *************************************************** à Create a target table with the name EMP_DEPT with the columns DNAME, DEPT_WISE_SALARY using SQL*Plus and import it into Informatica as : > CREATE TABLE EMP_DEPT ( Dname Varchar2(30), Dept_Wise_Salary Number(10,2) ); à Create a mapping with the name M_SQ as: click on Ok à drag and drop the sources EMP and DEPT and the target EMP_DEPT tables into the mapping window workspace as:
• 139. à delete the two default Source Qualifier tables; the mapping window then looks as: à go to the Transformations menu à click on Create à select the transformation type as Source Qualifier and give the name Emp_Dept as below: à click on Create; a small window will open asking us to select the tables for which the Source Qualifier transformation is created. Here it covers both tables, so select EMP and DEPT as below:
• 140. à click on Ok à click on Done à the newly created Source Qualifier transformation's ports are automatically mapped to the corresponding ports of both the EMP and DEPT tables as: à now connect DNAME and SAL from the Source Qualifier transformation to the target table as :
• 141. à double click on the Source Qualifier à click on the Properties tab as: à click on the SQL Query attribute down arrow mark (marked in red above); a new SQL editor window will open à in that select the ODBC data source RRITEC_SOURCE, the Username SOURCE_USER and the Password SOURCE_USER as below :
• 142. Click on Generate SQL à the query is then generated automatically in the workspace of the SQL window, as shown below: à click on Ok à click on Ok à the remaining procedure to create the Session and the Workflow is the same here also. The generated statement can also be edited by hand, as sketched below.
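Since the target column is DEPT_WISE_SALARY, one sensible hand-edited override is an aggregate; a hedged sketch (this assumes the Source Qualifier is reduced to just these two output ports, and it is not the wizard default):
> SELECT DEPT.DNAME, SUM( EMP.SAL ) AS DEPT_WISE_SALARY
  FROM EMP, DEPT
  WHERE EMP.DEPTNO = DEPT.DEPTNO
  GROUP BY DEPT.DNAME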
• 143. UNION TRANSFORMATION ********************************* à Connect to the source database ( SOURCE_USER ) using SQL*Plus and create two tables as below: > CREATE TABLE emp1020 AS SELECT deptno, sal FROM scott.EMP WHERE deptno IN ( 10, 20 ); > CREATE TABLE emp2030 AS SELECT deptno, sal FROM scott.EMP WHERE deptno IN ( 20, 30 ); à connect to the target database ( TARGET_USER ) using SQL*Plus and create a table as: > CREATE TABLE emp_Union102030 AS SELECT deptno, sal FROM scott.EMP WHERE 1 = 2; (the 1 = 2 predicate copies only the table structure, no rows) à now import the two source tables and the one target table into the Informatica Designer. à create a mapping with the name m_UNION as: Click on Ok à drag and drop the two sources and the target table into the mapping window as:
• 144. à go to the Transformations menu à click on Create à select the type as Union and name it t_UNION as: à click on Create à click on Done à the new Union transformation appears in the workspace as shown below: à double click on the Union transformation à click on the Groups tab as below:
  • 145. à click on Add a new group icon twice which is marked by Red colorà give names as below: à click on Group Ports tab
  • 146. à click on Add a new port icon twice and name those two ports as below: à click on Okà now the mapping window will look as:
  • 147. à connect all ports from SQ_EMP1020 with the emp1020 group of Union Transformation as: à similarly, connect all the ports from SQ_EMP2030 with the emp2030 group of Union Transformation as below:
  • 148. à now, connect all the output ports from Union Transformation to target table as below:
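Functionally, the Union transformation behaves like SQL UNION ALL: it does not remove duplicates. A sketch of the equivalent query over the two source tables:
> SELECT deptno, sal FROM emp1020
  UNION ALL
  SELECT deptno, sal FROM emp2030;
-- deptno 20 rows come from both inputs, so expect duplicates in emp_Union102030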
• 149. STORED PROCEDURE TRANSFORMATION ************************************************* à Using SQL*Plus, create a target table with the columns EMPNO, SAL, TAX and import that table into the Informatica Designer tool : > CREATE TABLE emp_sproc ( EMPNO number(5), SAL number(10,2), TAX number(10,2) ); à using SQL*Plus, connect to the source database ( SOURCE_USER ) and create a stored procedure as: > CREATE OR REPLACE PROCEDURE emp_tax ( Sal IN number, Tax OUT number ) IS BEGIN Tax := Sal * 0.1; END; à create a mapping with the name m_SPROC as: à click on Ok à drag and drop the source EMP table and the target emp_sproc into the mapping window as:
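Before using the procedure in the mapping, a quick SQL*Plus sanity check (a minimal sketch):
> SET SERVEROUTPUT ON
> DECLARE
    t NUMBER;
  BEGIN
    emp_tax( 1000, t );          -- 10% of 1000
    DBMS_OUTPUT.PUT_LINE( t );   -- prints 100
  END;
  /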
  • 150. à goto Transformations menuà click on Import Stored Procedure optionà new window will open , in that provide required details to import the stored procedure as: à click on Okà from the Source Qualifier transformation, connect SAL to stored procedure SAL port and connect TAX port of Stored Procedure to TAX port of target table as below: à connect EMPNO and SAL from source qualifier transformation to corresponding ports in Target table as below:
• 151. à creating the Task and the Workflow and providing the proper connections for Source and Target in the session is the same here also. Create a Mapping using an Unconnected Stored Procedure Transformation *********************************************************************** à create a mapping with the name m_SP_UnConnected as: à click on Ok à drag and drop the source and target tables into the mapping window workspace as below :
• 152. à go to the Transformations menu à click on Create à select the type as Expression and name it t_SP_UnConnected as: à click on Create à click on Done à drag and drop EMPNO and SAL from the Source Qualifier transformation to the Expression transformation as : à go to Transformations à click on the Import Stored Procedure option à provide the required details à select our stored procedure à click on Ok. à double click on the Expression transformation à click on the Ports tab as:
• 153. à click on the Add a new port to this transformation icon (marked in red in the window above) and name the port TAX à disable its Input checkbox so it is output-only: à click on the TAX expression down arrow mark (circled in red); the Expression Editor window will open; type the formula shown below:
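The formula calls the unconnected Stored Procedure with the :SP reference qualifier; it reads:
:SP.EMP_TAX( SAL, PROC_RESULT )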
• 154. Note : in the above, PROC_RESULT is a keyword; the procedure's OUT value is returned into it. à click on Validate à click on Ok à click on Ok à connect all the ports from the Expression transformation to the target table as: à creating the Session and the Workflow is the same here also. TRANSACTION CONTROL TRANSFORMATION ***************************************************** à It is an Active and Connected type of transformation. à A transaction is the set of rows bound by commit or roll back rows. You can define a transaction based on a varying number of input rows. à In PowerCenter, you define transaction control at the following levels: Within a Mapping : Within a mapping, you use the Transaction Control transformation to define a transaction. You define transactions using an expression in a Transaction Control transformation. Based on the return value of the expression, you can choose to commit, roll back, or continue without any transaction changes. Within a Session : When you configure a session, you configure it for user-defined commit. You can choose to commit or roll back a transaction if the Integration Service fails to transform or write any row to the target. à When you run the session, the Integration Service evaluates the expression for each row that enters the transformation. When it evaluates a commit row, it commits all rows in the transaction to the target or targets. When the Integration Service evaluates a roll back row, it rolls back all rows in the transaction from the target or targets.
• 155. à in the Transaction Control transformation, we have 5 built-in variables. 1) TC_CONTINUE_TRANSACTION : The Integration Service does not perform any transaction change for this row. This is the default value of the expression. 2) TC_COMMIT_BEFORE : The Integration Service commits the transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction. 3) TC_COMMIT_AFTER : The Integration Service writes the current row to the target, commits the transaction, and begins a new transaction. The current row is in the committed transaction. 4) TC_ROLLBACK_BEFORE : The Integration Service rolls back the current transaction, begins a new transaction, and writes the current row to the target. The current row is in the new transaction. 5) TC_ROLLBACK_AFTER : The Integration Service writes the current row to the target, rolls back the transaction, and begins a new transaction. The current row is in the rolled back transaction. Note : If the transaction control expression evaluates to a value other than commit, roll back, or continue, the Integration Service fails the session. à Create a target table with the name EMP_TCL with the columns EMPNO, ENAME, JOB, SAL and import it into the Informatica tool. > CREATE TABLE EMP_TCL ( EMPNO number(5), ENAME varchar2(20), JOB varchar2(20), SAL number(10,2) ) ; à go to the Mappings menu à click on Create à give the name m_TCL as : à click on Ok à drag and drop the source EMP table and the target EMP_TCL table into the mapping window workspace as:
• 156. à go to the Transformations menu à click on Create à select the type as Transaction Control and give the name t_TC as: à click on Create à click on Done à drag and drop EMPNO, ENAME, JOB and SAL from SQ_EMP to the t_TC transformation table as : à double click on the Transaction Control transformation à click on the Properties tab as:
• 157. à click on the Transaction Control Condition value drop-down button (circled in red); the Expression Editor will open à in that give a condition (see the example expression below) à click on Validate à click on Ok à click on Ok à connect all the ports from the Transaction Control transformation to the target table as:
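One plausible condition for this EMP-based mapping (an illustrative sketch, not necessarily the exact one in the screenshot):
IIF( SAL > 2000, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION )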
• 158. à Save the mapping à creating the Session and the Workflow is the same here also. Advanced Training of Informatica Mapping Parameters and Variables ************************************* PARAMETERS : à A mapping parameter represents a constant value that we can define before running a session. à A mapping parameter retains the same value throughout the entire session. Initial and Default value : When we declare a mapping parameter or variable in a mapping or a mapplet, we can enter an initial value. When the Integration Service needs an initial value, and we did not declare an initial value for the parameter or variable, the Integration Service uses a default value based on the data type of the parameter or variable:
Data        Default Value
--------    -----------------
Numeric     0
String      Empty string
Datetime    1/1/1
à create a target table with the columns EMPNO, ENAME, JOB, SAL, COMM and import it into Informatica as :
• 159. > CREATE TABLE emp_par_var ( EMPNO number(5), ENAME varchar2(20), JOB varchar2(20), SAL number(10,2), COMM number(10,2) ) ; à create a mapping with the name m_MP as: Click on Ok à drag and drop the source EMP table and the target EMP_PAR_VAR table into the mapping window workspace as: à go to the Mappings menu à click on the Parameters and Variables option à a new window will open as:
• 160. à click on the Add a new variable to this table icon à name the new field $$DEPTNO à set the Type to Parameter from the dropdown à set IsExprVar to FALSE as below:
• 161. à click on Ok à drop a Filter transformation into the mapping workspace à drag and drop EMPNO, ENAME, JOB, SAL, COMM from the Source Qualifier transformation to the Filter transformation as: à double click on the Filter transformation à click on the Properties tab à click on the Filter Condition value drop-down arrow mark (marked in red); the Expression Editor window will open as:
• 162. à type DEPTNO = in the formula space, click on the Variables tab à expand the Mapping Parameters tab à under that we will find our created parameter, i.e., $$DEPTNO à double click on $$DEPTNO in the left-side navigator window; the formula window then shows:
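The resulting filter condition reads:
DEPTNO = $$DEPTNO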
• 163. à click on Validate à click on Ok à click on Ok à click on Ok à drag and drop the corresponding columns from the Filter transformation to the target table as: à save the mapping à creating the Session and the Workflow is the same here also. Creating the Parameter File : navigate to the path F:\Informatica\PowerCenter 8.6.0\Server\Infa_Shared\BWParam à right click on the free space à select New à click on Text Document à name it Deptno.prm à double click on that file and type:
[RRITEC.S_MP]
$$DEPTNO=20
Save the file and close it ( RRITEC is the folder name; S_MP is the session name )