Monday, February 11, 2008

Informatica ETL Naming Conventions


These standards should be followed for all new development and for rework of existing development objects. Existing objects (e.g. those already in Production) should only be modified when the opportunity presents itself (e.g. when changes are being made to their mappings, workflows, etc.).

(Note: The list includes new or changed standards introduced as a result of the upgrade from PowerMart to PowerCenter 7.1.3.)

Repository Objects Naming Convention
Mapping Name: m_<meaningful name> (Examples: m_RefreshSIQREPL, m_PERSON)
Session Name: s_<mapping name> (Example: s_m_PERSON)
Folder Name: Folder names for new users will be their department followed by their username. Existing users' folder names are typically their group followed by their user name. Folder names may also be based on project, subject area, and promotion group.
Log Files: <session name>.log OR <workflow name>.log
Bad Files: <target name>.bad
Source & Target Names: Do not make any distinction here - Informatica color-codes the sources and targets, so it is easy to tell them apart. We do try to prefix these with a folder name, so that when shortcuts or global objects are created we can tell where they originally reside.
Workflow Names: wflw_<meaningful name>

Workflow Objects Naming Convention
Worklet Names: wklt_<meaningful name>
Command Task Names: cmd_<meaningful name>
Event Names: evtw_<meaningful name> for a wait event; evtr_<meaningful name> for a raise event
Decision Names: dcsn_<meaningful name>
Control Names: cntrl_<meaningful name>
Email Names: email_<meaningful name>
Assignment Names: asgmnt_<meaningful name>
Timer Names: tmr_<meaningful name>

Transformation Objects Naming Convention
Advanced External Procedure Transformation: aep_<meaningful name>
Aggregator Transformation: agg_<meaningful name> that leverages the expression or a name that describes the processing being done.
Expression Transformation: exp_<meaningful name> that leverages the expression or a name that describes the processing being done.
Filter Transformation: fil_<meaningful name> that describes the processing being done.
Joiner Transformation: jnr_<meaningful name>
Lookup Transformation: lkp_<meaningful name>
Mapplet: mplt_<meaningful name>
Normalizer Transformation: nrm_<source name>
Rank Transformation: rnk_<meaningful name> that describes the processing being done.
Source Qualifier Transformation: sq_<source name>
Update Strategy Transformation: upd_<meaningful name>
Stored Procedure Transformation: sp_<procedure name> (Ex_<procedure name> can also be used for an External Procedure Transformation)
Sequence Generator Transformation: seq_<meaningful name>
Router Transformation: rtr_<meaningful name>
Sorter Transformation: srt_<meaningful name>
Custom Transformation: cust_<meaningful name> that leverages the expression or a name that describes the processing being done.
Transaction Control Transformation: trctrl_<meaningful name> that leverages the expression or a name that describes the processing being done.
XML Source Qualifier Transformation: XMLsq_<source name>
Application Source Qualifier Transformation: Appsq_<source name>
MidStream XML Parser Transformation: XMLprse_<meaningful name> that leverages the expression or a name that describes the processing being done.
MidStream XML Generator Transformation: XMLgen_<meaningful name> that leverages the expression or a name that describes the processing being done.
Union Transformation: un_<meaningful name> that leverages the expression or a name that describes the processing being done.

Guidelines for naming ports and variables (not all teams will use these variables in the same way, so below are some basic guidelines for naming them).

Ports and Variables Naming Standards
Input Ports: prefixed with in_ (e.g. in_fieldname)
Output Ports: prefixed with out_ (e.g. out_fieldname)
Variable Ports: prefixed with v_ (e.g. v_function)


Informatica Mapping, Informatica Session Performance Tuning
When to optimize mappings

The best time in the development cycle is after system testing. Focus on mapping-level optimization only after optimizing the target and source databases.

Use the session log to identify whether the source, the target or the transformations are the performance bottleneck. The session log contains thread summary records:

MASTER> PETL_24018 Thread [READER_1_1_1] created for the read stage of partition point [SQ_test_all_text_data] has completed: Total Run Time = [11.703201] secs, Total Idle Time = [9.560945] secs, Busy Percentage = [18.304876].
MASTER> PETL_24019 Thread [TRANSF_1_1_1_1] created for the transformation stage of partition point [SQ_test_all_text_data] has completed: Total Run Time = [11.764368] secs, Total Idle Time = [0.000000] secs, Busy Percentage = [100.000000].
MASTER> PETL_24022 Thread [WRITER_1_1_1] created for the write stage of partition point(s) [test_all_text_data1] has completed: Total Run Time = [11.778229] secs, Total Idle Time = [8.889816] secs, Busy Percentage = [24.523321].

If one thread has a busy percentage close to 100% and the others have significantly lower values, the thread with the high busy percentage is the bottleneck. In the example above, the session is transformation bound.

Identifying Target Bottlenecks

The most common performance bottleneck occurs when the Informatica Server writes to a target database. You can identify target bottlenecks by configuring the session to write to a flat file target. If the session performance increases significantly when you write to a flat file, you have a target bottleneck. Consider performing the following tasks to increase performance:
* Drop indexes and key constraints.
* Increase checkpoint intervals.
* Use bulk loading.
* Use external loading.
* Increase database network packet size.
* Optimize target databases.

Identifying Source Bottlenecks

If the session reads from a relational source, you can use a Filter transformation, a read test mapping, or a database query to identify source bottlenecks:
* Filter transformation - measure the time taken to process a given amount of data, then add an always-false Filter transformation in the mapping after each source qualifier so that no data is processed past the filter transformation. You have a source bottleneck if the new session runs in about the same time.
* Read test session - compare the time taken to process a given set of data using the session with that for a session based on a copy of the mapping with all transformations after the source qualifier removed and the source qualifiers connected to file targets. You have a source bottleneck if the new session runs in about the same time.
* Extract the query from the session log and run it in a query tool. Measure the time taken to return the first row and the time to return all rows. If there is a significant difference in time, you can use an optimizer hint to eliminate the source bottleneck.

Consider performing the following tasks to increase performance:
* Optimize the query.
* Use conditional filters.
* Increase database network packet size.
* Connect to Oracle databases using the IPC protocol.

Identifying Mapping Bottlenecks

If you determine that you do not have a source bottleneck, add an always-false Filter transformation in the mapping before each target definition so that no data is loaded into the target tables. If the time it takes to run the new session is the same as the original session, you have a mapping bottleneck.
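Before moving on to counters, here is a rough sketch of the database-side actions mentioned above: re-running the source query extracted from the session log with an optimizer hint, and relaxing target indexes and constraints around a bulk load. The table, index and constraint names (src_orders, tgt_orders, idx_tgt_orders_dt, fk_tgt_orders_cust) are hypothetical and the hint values are only a starting point, not a recipe.

  -- Source side: re-run the extracted query with a hint and compare timings
  -- against the un-hinted version.
  SELECT /*+ FULL(o) PARALLEL(o, 4) */
         o.order_id, o.customer_id, o.order_dt, o.amount
    FROM src_orders o
   WHERE o.order_dt >= TO_DATE('2008-01-01', 'YYYY-MM-DD');

  -- Target side: relax constraints and indexes before a bulk load,
  -- then put them back once the session completes.
  ALTER TABLE tgt_orders DISABLE CONSTRAINT fk_tgt_orders_cust;
  DROP INDEX idx_tgt_orders_dt;
  -- ... run the session ...
  CREATE INDEX idx_tgt_orders_dt ON tgt_orders (order_dt);
  ALTER TABLE tgt_orders ENABLE CONSTRAINT fk_tgt_orders_cust;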
You can also identify mapping bottlenecks by examining performance counters.

Readfromdisk and Writetodisk counters: If a session contains Aggregator, Rank, or Joiner transformations, examine each Transformation_readfromdisk and Transformation_writetodisk counter. If these counters display any number other than zero, you can improve session performance by increasing the index and data cache sizes. Note that if the session uses incremental aggregation, the counters must be examined during the run, because the Informatica Server writes to disk when saving historical data at the end of the run.

Rowsinlookupcache counter: A high value indicates a larger lookup, which is more likely to be a bottleneck.

Errorrows counters: If a session has large numbers in any of the Transformation_errorrows counters, you might improve performance by eliminating the errors.

BufferInput_efficiency and BufferOutput_efficiency counters: Any dramatic difference in a given set of BufferInput_efficiency and BufferOutput_efficiency counters indicates inefficiencies that may benefit from tuning.

To enable collection of performance data:
1. Set the session property Collect Performance Data (on the Performance tab).
2. Increase the size of the Load Manager Shared Memory by 200 KB for each session in shared memory that you configure to create performance details. If you create performance details for all sessions, multiply the MaxSessions parameter by 200 KB to calculate the additional shared memory requirements.

To view performance details in the Workflow Monitor:
1. While the session is running, right-click the session in the Workflow Monitor and choose Properties.
2. Click the Performance tab in the Properties dialog box.

To view the performance details file:
1. Locate the performance details file. The Informatica Server names the file session_name.perf and stores it in the same directory as the session log.
2. Open the file in any text editor.

General Optimizations

Single-pass reading - instead of reading the same data several times, combine mappings that use the same set of source data and use a single source qualifier.

Avoid unnecessary data conversions: for example, if your mapping moves data from an Integer column to a Decimal column, then back to an Integer column, the unnecessary data type conversion slows performance.

Factor out common expressions/transformations and perform them before data pipelines split.

Optimize Char-Char and Char-Varchar comparisons by using the Treat CHAR as CHAR On Read option in the Informatica Server setup, so that the Informatica Server does not trim trailing spaces from the end of Char source fields.

Eliminate transformation errors (conversion errors, conflicting mapping logic, and any condition set up as an error, such as null input). In large numbers they restrict performance because for each one, the Informatica Server pauses to determine its cause, remove the row from the data flow, and write it to the session log or bad file. As a short-term fix, reduce the tracing level on sessions that must generate large numbers of errors.

Optimize lookups. Cache lookups if:
o the number of rows in the lookup table is significantly less than the typical number of source rows
o un-cached lookups perform poorly (e.g. they are based on a complex view or an unindexed table)

Optimize cached lookups:
o Use a persistent cache if the lookup data is static
o Share caches if several lookups are based on the same data set
o Reduce the number of cached rows using a SQL override with a restriction
o Index the columns used in the lookup ORDER BY

Courtesy: ITtoolbox
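To make the cached-lookup suggestions above concrete, here is a hedged sketch of the kind of restricted, ordered query a lookup SQL override might use. The names (dim_customer, current_flag, customer_id) are hypothetical, and the exact override syntax and ORDER BY handling depend on your PowerCenter version, so treat this as a shape rather than a recipe.

  SELECT cust.customer_id,
         cust.customer_key,
         cust.customer_name
    FROM dim_customer cust
   WHERE cust.current_flag = 'Y'        -- cache only the rows the mapping can actually hit
   ORDER BY cust.customer_id            -- order by (and index) the lookup condition columns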

Posted by sami at 6:52 AM

Labels: Informatica, Informatica Performance Tuning


Thursday
Increasing Informatica Server Performance
There are many factors that can affect session performance. Here are some of them to check before doing tuning that is specific to Informatica:
1. Slow disk access on the source and target databases, the source and target file systems, and the Informatica Server and repository machines can slow session performance, so check the hard disks on the related machines.
2. Slow network connections can slow session performance, so improve network speed where possible.
3. Check that the Informatica Server and related machines run on high-performance CPUs.
4. Configure the physical memory for the Informatica Server machine to minimize paging to disk.
5. Optimize the database configuration.
6. Staging areas. If you use a staging area, you force the Informatica Server to perform multiple passes over your data. Where possible, remove staging areas to improve performance.
7. You can run multiple Informatica Servers on separate systems against the same repository. Distributing the session load across separate Informatica Server systems increases performance.

Informatica-specific tuning includes:
- Transformation tuning
- Using caches
- Avoiding lookups by using DECODE for smaller, frequently used tables (see the SQL sketch below)
- Applying filters at the earliest point in the data flow
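As a small illustration of the DECODE point above: for a tiny, static code table it can be cheaper to fold the decode into the source SQL (or a source qualifier override) than to add a Lookup transformation. The orders table and status codes below are made up for the example.

  SELECT ord.order_id,
         ord.amount,
         DECODE(ord.status_cd,
                'O', 'Open',
                'S', 'Shipped',
                'C', 'Closed',
                     'Unknown') AS status_desc   -- replaces a lookup against a small code table
    FROM orders ord;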

Posted by sami at 6:50 AM

Labels: Informatica, Informatica Performance Tuning


Wednesday
New Features Of Informatica 8.1.1
1) The architecture of PowerCenter 8 has changed a lot; PC8 is service-oriented for modularity, scalability and flexibility.
2) The Repository Service and Integration Service (replacements for the Repository Server and Informatica Server) can be run on different computers in a network (so-called nodes), even redundantly.
3) Management is centralized, which means services can be started and stopped on nodes via a central web interface.
4) Client tools access the repository via that centralized machine; resources are distributed dynamically.
5) Running all services on one machine is still possible, of course.
6) It has support for unstructured data, which includes spreadsheets, email, Microsoft Word files, presentations and PDF documents. It provides high availability and seamless failover, eliminating single points of failure.
7) It has added performance improvements. (To bump up system performance, Informatica has added "pushdown optimization", which moves data transformation processing to the native relational database I/O engine whenever it is most appropriate; see the SQL sketch after this list.)
8) Informatica has now added more tightly integrated data profiling, cleansing, and matching capabilities.
9) Informatica has added a new web-based administrative console.
10) Ability to write a Custom Transformation in C++ or Java.
11) The Midstream SQL transformation has been added in 8.1.1 (it was not in 8.1).
12) Dynamic configuration of caches and partitioning.
13) The Java transformation is introduced.
14) User-defined functions.
15) The PowerCenter 8 release has an "Append to Target file" feature.
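To give a feel for what "pushdown optimization" in item 7 means: instead of pulling every row up to the Integration Service, filter and expression logic can be collapsed into SQL that runs inside the database, conceptually similar to the hand-written statement below. This is only an illustration with made-up table names (src_customer, tgt_customer); the SQL PowerCenter actually generates will differ.

  INSERT INTO tgt_customer (customer_id, customer_name, country_cd)
  SELECT src.customer_id,
         UPPER(TRIM(src.customer_name)),   -- expression logic pushed to the database
         NVL(src.country_cd, 'US')
    FROM src_customer src
   WHERE src.active_flag = 'Y';            -- filter logic pushed to the database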

Posted by sami at 8:14 PM

Labels: Informatica


BASIC, INTERMEDIATE, AND ADVANCED TUNING PRACTICES

Informatica Map/Session Tuning covers basic, intermediate, and advanced tuning practices.

Table of Contents
* Basic Guidelines
* Intermediate Guidelines
* Advanced Guidelines

INFORMATICA BASIC TUNING GUIDELINES

The following points are high-level issues on where to go to perform "tuning" in Informatica's products. These are NOT permanent instructions, nor are they the end-all solution - just some items which, if tuned first, might make a difference. The level of skill available for certain items will cause the results to vary.

To 'test' performance throughput it is generally recommended that the source set of data produce about 200,000 rows to process. Beyond this, the performance problems / issues may lie in the database - partitioning tables, dropping / re-creating indexes, striping RAID arrays, etc. Without such a large result set to deal with, your average timings will be skewed by other users on the database, processes on the server, or network traffic. This seems to be an ideal test size for producing mostly accurate averages.

Try tuning your maps with these steps first, then move to tuning the session; iterate this sequence until you are happy, or cannot achieve better performance by continued efforts. If the performance is still not acceptable, then the architecture must be tuned (which can mean changes to what maps are created). In this case, you can contact us - we tune the architecture and the whole system from top to bottom.

KEEP THIS IN MIND: In order to achieve optimal performance, it's always a good idea to strike a balance between the tools, the database, and the hardware resources. Allow each to do what they do best. Varying the architecture can make a huge difference in speed and optimization possibilities.

1. Utilize a database (like Oracle / Sybase / Informix / DB2 etc.) for significant data handling operations (such as sorts, groups, aggregates). In other words, staging tables can be a huge benefit to parallelism of operations. Parallel design - simply defined by the mathematics - nearly always cuts your execution time. Staging tables have many benefits. Please see the staging table discussion in the methodologies section for full details.
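As a sketch of point 1: heavy grouping and aggregation can be done in the database against a staging table, leaving the mapping to handle only the complex logic. The table and column names here (stg_sales, stg_sales_daily) are hypothetical.

  -- Aggregate in the database once the raw rows are staged,
  -- instead of pushing every row through an Aggregator transformation.
  INSERT INTO stg_sales_daily (store_id, sale_dt, total_qty, total_amount)
  SELECT s.store_id,
         TRUNC(s.sale_ts) AS sale_dt,
         SUM(s.qty),
         SUM(s.amount)
    FROM stg_sales s
   GROUP BY s.store_id, TRUNC(s.sale_ts);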
2. Localize. Localize all target tables on to the SAME instance of Oracle (same SID), or the same instance of Sybase. Try not to use synonyms (remote database links) for anything (including: lookups, stored procedures, target tables, sources, functions, privileges, etc.). Utilizing remote links will most certainly slow things down. For Sybase users, remote mounting of databases can definitely be a hindrance to performance.

3. If you can, localize all target tables, stored procedures, functions, views, and sequences in the SOURCE database. Again, try not to connect across synonyms. Synonyms (remote database tables) could potentially affect performance by as much as a factor of 3 times or more.

4. Remove external registered modules. Perform pre-processing / post-processing utilizing PERL, SED, AWK, or GREP instead. The Application Programmers Interface (API) which calls externals is inherently slow (as of 1/1/2000). Hopefully Informatica will speed this up in the future. The external module which exhibited speed problems was the regular expression module (Unix: Sun Solaris E450, 4 CPUs, 2 GB RAM, Oracle 8i and Informatica). It broke speed from 1500+ rows per second without the module to 486 rows per second with the module. No other sessions were running. (This was a SPECIFIC case, with a SPECIFIC map - it's not like this for all maps.)

5. Remember that Informatica suggests that each session takes roughly 1 to 1.5 CPUs. In keeping with this, Informatica plays well with RDBMS engines on the same machine, but does NOT get along (performance wise) with ANY other engine (reporting engine, java engine, OLAP engine, java virtual machine, etc.).

6. Remove any database-based sequence generators. These require a wrapper function / stored procedure call, and utilizing such stored procedures has caused performance to drop by a factor of 3 times. This slowness is not easily debugged - it can only be spotted in the Write Throughput column. Copy the map, replace the stored procedure call with an internal sequence generator for a test run - this is how fast you COULD run your map. If you must use a database-generated sequence number, then follow the instructions for the staging table usage. If you're dealing with GIGs or terabytes of information, this should save you lots of hours of tuning. IF YOU MUST have a shared sequence generator, then build a staging table from the flat file, add a SEQUENCE ID column, and call a POST TARGET LOAD stored procedure to populate that column. Place the post target load procedure in to the flat file to staging table load map. A single call inside the database, followed by a batch operation to assign sequences, is the fastest method for utilizing shared sequence generators.
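A minimal sketch of the post-target-load procedure described in point 6, assuming an Oracle sequence and a staging table with an empty SEQUENCE ID column. The names stg_orders, seq_id and seq_orders are hypothetical.

  CREATE OR REPLACE PROCEDURE assign_stage_seq AS
  BEGIN
    -- Batch-assign the shared sequence after the flat file has been
    -- bulk-loaded into the staging table.
    UPDATE stg_orders
       SET seq_id = seq_orders.NEXTVAL
     WHERE seq_id IS NULL;
    COMMIT;
  END;
  /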
7. TURN OFF VERBOSE LOGGING. The session log has a tremendous impact on the overall performance of the map. Force an override in the session, setting it to NORMAL logging mode. Unfortunately the logging mechanism is not "parallel" in the internal core; it is embedded directly in to the operations.

8. Turn off 'Collect Performance Statistics'. This also has an impact - although minimal at times - it writes a series of performance data to the performance log. Removing this operation reduces reliance on the flat file operations. However, it may be necessary to have this turned on DURING your tuning exercise; it can reveal a lot about the speed of the reader and writer threads.

9. If your source is a flat file, utilize a staging table. This way you can also use SQL*Loader, BCP, or some other database bulk-load utility. Place basic logic in the source load map, and remove all potential lookups from the code. At this point, if your reader is slow, then check two things: 1) if you have an item in your registry or configuration file which sets the "ThrottleReader" to a specific maximum number of blocks, it will limit your read throughput (this only needs to be set if the sessions have demonstrated problems with constraint-based loads); 2) move the flat file to local internal disk (if at all possible). Try not to read a file across the network, or from a RAID device. Most RAID arrays are fast, but Informatica seems to top out, where internal disk continues to be much faster. Here, a link will NOT work to increase speed - it must be the full file itself, stored locally.

10. Try to eliminate the use of non-cached lookups. By issuing a non-cached lookup, your performance will be impacted significantly - particularly if the lookup table is also a "growing" or "updated" target table. This generally means the indexes are changing during operation, and the optimizer loses track of the index statistics. Again, utilize staging tables if possible. In utilizing staging tables, views in the database can be built which join the data together, or Informatica's Joiner object can be used to join data together - either one will help dramatically increase speed.
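For point 10, a hedged example of the kind of database view that joins staged data so the mapping no longer needs an un-cached lookup (all names hypothetical):

  CREATE OR REPLACE VIEW v_stg_order_customer AS
  SELECT o.order_id,
         o.order_dt,
         o.amount,
         c.customer_id,
         c.customer_name
    FROM stg_orders o,
         stg_customers c
   WHERE c.customer_id = o.customer_id;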
11. Separate complex maps - try to break the maps out in to logical threaded sections of processing. Re-arrange the architecture if necessary to allow for parallel processing. There may be smaller components doing individual tasks, however the throughput will be proportionate to the degree of parallelism that is applied. A discussion on HOW to perform this task is posted on the methodologies page; please see that discussion for further details.

12. BALANCE. Balance between Informatica and the power of SQL and the database. Try to utilize the DBMS for what it was built for: reading / writing / sorting / grouping / filtering data en masse. Use Informatica for the more complex logic, outside joins, data integration, multiple source feeds, etc. The balancing act is difficult without DBA knowledge. In order to achieve a balance, you must be able to recognize which operations are best in the database and which ones are best in Informatica. This does not detract from the use of the ETL tool, rather it enhances it - it's a MUST if you are performance tuning for high-volume throughput.

13. TUNE the DATABASE. Don't be afraid to estimate: small, medium, large, and extra-large source data set sizes (in terms of numbers of rows and average number of bytes per row), expected throughput for each, turnaround time for load, and whether it is a trickle feed. Give this information to your DBAs and ask them to tune the database for the "worst case". Help them assess which tables are expected to be high read / high write, which operations will sort (order by), etc. Moving disks, or assigning the right table to the right disk space, could make all the difference. Utilize a PERL script to generate "fake" data for small, medium, large, and extra-large data sets. Run each of these through your mappings - in this manner, the DBA can watch or monitor throughput as a real load size occurs.

14. Be sure there is enough SWAP and TEMP space on your PMSERVER machine. Not having enough disk space could potentially slow down your entire server during processing (in an exponential fashion). Sometimes this means watching the disk space while your session runs; otherwise you may not get a good picture of the space available during operation. This is particularly true if your maps contain aggregates, or lookups that flow to the disk cache directory, or if you have a Joiner object with heterogeneous sources.

15. Place some good server load monitoring tools on your PMServer in development - watch it closely to understand how the resources are being utilized, and where the hot spots are. Try to follow the recommendations - it may mean upgrading the hardware to achieve throughput. Look in to EMC's disk storage array - while expensive, it appears to be extremely fast; I've heard (but not verified) that it has improved performance in some cases by up to 50%.

16. SESSION SETTINGS. In the session, there is only so much tuning you can do. Balancing the throughput is important - by turning on "Collect Performance Statistics" you can get a good feel for what needs to be set in the session, or what needs to be changed in the database. Read the performance section carefully in the Informatica manuals. Basically what you should try to achieve is: OPTIMAL READ, OPTIMAL THROUGHPUT, OPTIMAL WRITE. Over-tuning one of these three pieces can result in ultimately slowing down your session. For example: your write throughput is governed by your read and transformation speed; likewise, your read throughput is governed by your transformation and write speed. The best method to tune a problematic map is to break it in to components for testing. For read throughput: tune for the reader, see what the settings are, and send the write output to a flat file for less contention. Check the "ThrottleReader" setting (which is not configured by default), and increase the Default Buffer Size by a factor of 64K each shot - ignore the warning above 128K. If the reader appears to increase during the session, then stabilize (after a few thousand rows), then try increasing the Shared Session Memory from 12 MB to 24 MB. If the reader still stabilizes, then you have a slow source, slow lookups, or your CACHE directory is not on internal disk. If the reader's throughput continues to climb above where it stabilized, make note of the session settings. Check the Performance Statistics to make sure the writer throughput is NOT the bottleneck - you are attempting to tune the reader here, and don't want the writer threads to slow you down. Change the map target back to the database targets and run the session again. This time, make note of how much the reader slows down; its optimal performance was reached with the flat file(s), so this time slow targets are the cause. NOTE: if your reader session to a flat file just doesn't ever "get fast", then you've got some basic map tuning to do. Try to merge expression objects, set your lookups to unconnected (for re-use if possible), and check your index and data cache settings if you have aggregation or lookups being performed. If you have a slow writer, change the map to a single target table at a time - see which target is causing the "slowness" and tune it. Make copies of the original map, and break down the copies. Once the "slower" of the N targets is discovered, talk to your DBA about partitioning the table, updating statistics, removing indexes during load, etc. There are many database things you can do here.

17. Remove all other "applications" on the PMServer, except for the database / staging database or data warehouse itself. PMServer plays well with an RDBMS (relational database management system), but doesn't play well with application servers, particularly Java Virtual Machines, web servers, security servers, application servers, and report servers. All of these items should be broken out to other machines. This is critical to improving performance on the PMServer machine.

INFORMATICA INTERMEDIATE TUNING GUIDELINES

The following numbered items are for intermediate-level tuning. After going through all the pieces above and still having trouble, these are some things to look for. These are items within a map which make a difference in performance (we've done extensive performance testing of Informatica to be able to show these effects). Keep in mind that at this level, the performance isn't affected unless there are more than 1 million rows (average size: 2.5 GB of data). ALL items are Informatica MAP items and Informatica objects - none are outside the map. Also remember, this applies to PowerMart / PowerCenter (4.5x, 4.6x / 1.5x, 1.6x) - other versions have NOT been tested. The order of these items is not relevant to speed; each one has its own impact on the overall performance. Again, throughput is also gauged by the number of objects constructed within a map/maplet. Sometimes it's better to sacrifice a little readability for a little speed. It's the old paradigm: weighing readability and maintainability (true modularity) against raw speed. Make sure the client agrees with the approach, or that the data sets are large enough to warrant this type of tuning. BE AWARE: The following tuning tips range from "minor" cleanup to "last resort" types of things - only when data sets get very large should these items be addressed; otherwise, start with the BASIC tuning list above, then work your way in to these suggestions. To understand the intermediate section, you'll need to review these tips.
1. Filter expressions - try to evaluate them in a port expression. Try to create the filter (true/false) answer inside a port expression upstream. Complex filter expressions slow down the mapping. Again, expressions/conditions operate fastest in an Expression object with an output port for the result. It turns out that the longer or more complex the expression, the more severe the speed degradation. Place the actual expression (complex or not) in an EXPRESSION object upstream from the filter. Compute a single numerical flag - 1 for true, 0 for false - as an output port. Pump this in to the filter; you should see the maximum performance ability with this configuration.

2. Remove all "DEFAULT" value expressions where possible. Having a default value - even the "ERROR(xxx)" command - slows down the session. It causes an unnecessary evaluation of values for every data element in the map. The only time you want to use a DEFAULT value is when you have to provide a default value for a specific port. There is another method: placing a variable with an IIF(xxxx, DEFAULT VALUE, xxxx) condition within an expression. This will always be faster (if assigned to an output port) than a default value.

3. Variable ports are "slower" than output expressions. Whenever possible, use output expressions instead of variable ports. The variables are good for "static and state-driven" logic but do slow down the processing time, as they are allocated/reallocated each pass of a row through the expression object.

4. Datatype conversion - perform it in a port expression. Simply mapping a string to an integer, or an integer to a string, will perform the conversion; however it will be slower than creating an output port with an expression like to_integer(xxxx) and mapping an integer to an integer. This is because PMServer is left to decide if the conversion can be done mid-stream, which seems to slow things down.

5. Unused ports. Surprisingly, unused output ports have no effect on performance. This is a good thing. However, in general it is good practice to remove any unused ports in the mapping, including variables. Unfortunately there is no "quick" method for identifying unused ports.

6. String functions. String functions definitely have an impact on performance, particularly those that change the length of a string (substring, ltrim, rtrim, etc.). These functions slow the map down considerably; the operations behind each string function are expensive (de-allocate and re-allocate memory within a READER block in the session). String functions are a necessary and important part of ETL; we do not recommend removing their use completely, only limiting them to necessary operations. One of the ways we advocate tuning these is to use "varchar/varchar2" data types in your database sources, or to use delimited strings in source flat files (as much as possible). This will help reduce the need for "trimming" input. If your sources are in a database, perform the LTRIM/RTRIM functions on the data coming in from a database SQL statement; this will be much faster than performing it operationally mid-stream.
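A small sketch of the source-side trimming suggested in point 6 (src_customer and its columns are hypothetical). The database does the LTRIM/RTRIM once per row in the source query, so the mapping does not have to call string functions on every port:

  SELECT c.customer_id,
         LTRIM(RTRIM(c.customer_name))  AS customer_name,
         LTRIM(RTRIM(c.address_line1))  AS address_line1
    FROM src_customer c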
7. IIF conditionals are costly. When possible, arrange the logic to minimize the use of IIF conditionals. This is not particular to Informatica; it is costly in ANY programming language. It introduces "decisions" within the tool, and it also introduces multiple code paths across the logic (thus increasing complexity). Therefore, when possible, avoid utilizing an IIF conditional - one alternative here might be (for example) an Oracle DECODE function applied to a SQL source.

8. Sequence generators slow down mappings. Unfortunately there is no "fast" and easy way to create sequence generators. The cost is not that high for using a sequence generator inside of Informatica, particularly if you are caching values (caching at around 2000 seems to be the sweet spot). However, if at all avoidable, this is one "card" up a sleeve that can be played. If you don't absolutely need the sequence number in the map for calculation reasons, and you are utilizing Oracle, then let SQL*Loader create the sequence generator for all insert rows. If you're using Sybase, don't specify the identity column as a target - let the Sybase server generate the column. Also, try to avoid "reusable" sequence generators - they tend to slow the session down further, even with cached values.

9. Test expressions slow down sessions. Expressions such as IS_SPACES tend to slow down the mappings; this is a data validation expression which has to run through the entire string to determine if it is spaces, much the same as IS_NUMBER has to validate an entire string. These expressions (if at all avoidable) should be removed in cases where it is not necessary to "test" prior to conversion. Be aware, however, that direct conversion without testing (conversion of an invalid value) will kill the transformation. If you absolutely need a test expression for numerics, try this: IIF(<port> * 1 >= 0, <port>, NULL) - preferably when you don't care if the value is zero. An alpha in this expression should return a NULL to the computation. Yes, the IIF condition is slightly faster than IS_NUMBER - because IS_NUMBER parses the entire string, whereas the multiplication operator is the actual speed gain.

10. Reduce the number of OBJECTS in a map. Frequently, the idea of these tools is to make the "data translation map" as easy as possible. All too often, that means creating one expression for each throughput/translation (taking it to an extreme, of course). Each object adds computational overhead to the session, and timings may suffer. Sometimes, if performance is an issue or a goal, you can integrate several expressions in to one expression object, thus reducing the "object" overhead. In doing so, you could speed up the map.

11. Update expressions - session set to Update Else Insert. If you have this switch turned on, it will definitely slow the session down - Informatica performs 2 operations for each row: an update (with the PK), then, if it returns zero rows updated, an insert. The way to speed this up is to "know" ahead of time whether you need to issue a DD_UPDATE or DD_INSERT inside the mapping, then tell the Update Strategy what to do. After which you can change the session setting to INSERT and UPDATE AS UPDATE, or UPDATE AS INSERT.
12. Multiple targets are too slow. Frequently maps are generated with multiple targets, and sometimes multiple sources. This (despite first appearances) can really burn up time. If the architecture permits change, and the users support re-work, then try to change the architecture: 1 map per target is the general rule of thumb. Once you reach one map per target, the tuning gets easier. Sometimes it helps to reduce it to 1 source and 1 target per map. But if the architecture allows more modularization, 1 map per target usually does the trick. Going further, you could break it up: 1 map per target per operation (such as insert vs update). In doing this, it will provide a few more cards to the deck with which you can "tune" the session, as well as the target table itself. Going this route also introduces parallel operations. For further info on this topic, see my architecture presentations on staging tables and 3rd normal form architecture (Corporate Data Warehouse Slides).

13. Slow sources - flat files. If you've got slow sources, and these sources are flat files, you can look at some of the following possibilities. If the sources reside on a different machine, and you've opened a named pipe to get them across the network, then you've opened (potentially) a can of worms: you've introduced the network speed as a variable in the speed of the flat file source. Try to compress the source file, FTP PUT it on the local machine (local to PMServer), decompress it, then utilize it as a source. If you're reaching across the network to a relational table, and the session is pulling many rows (over 10,000), then the source system itself may be slow. You may be better off using a source system extract program to dump it to a file first, then follow the above instructions. However, there is something your SAs and network ops folks could do (if necessary) - this is covered in detail in the advanced section. They could backbone the two servers together with a dedicated network line (no hubs, routers, or other items in between the two machines). At the very least, they could put the two machines on the same sub-net. Now, if your file is local to PMServer but is still slow, examine the location of the file (which device it is on). If it's not on an INTERNAL DISK then it will be slower than if it were on an internal disk (the C drive for you folks on NT). This doesn't mean a Unix file LINK exists locally and the file is remote - it means the actual file is local.

14. Too many Aggregators. If your map has more than 1 Aggregator, chances are the session will run very, very slowly - unless the CACHE directory is extremely fast and your drive seek/access times are very high. Even so, placing Aggregators end-to-end in mappings will slow the session down by a factor of at least 2. This is because of all the I/O activity being a bottleneck in Informatica. What needs to be known here is that Informatica's products PM / PC up through 4.7x are NOT built for parallel processing. In other words, the internal core doesn't put the Aggregators on threads, nor does it put the I/O on threads - therefore, being a single-strung process, it becomes easy for a part of the session/map to become a process "blocked" by I/O factors. For I/O contention and resource monitoring, please see the database/data warehouse tuning guide.

15. Maplets containing Aggregators. Maplets are a good source for replicating data logic. But just because an Aggregator is in a maplet doesn't mean it won't affect the mapping. The reason maplets don't affect the speed of the mappings is that they are treated as a part of the mapping once the session starts - in other words, if you have an Aggregator in a maplet, followed by another Aggregator in the mapping, you will still have the problem mentioned above in #14. Reduce the number of Aggregators in the entire mapping (including maplets) to 1 if possible. If necessary, split the map up in to several different maps, and use intermediate tables in the database if required to achieve processing goals.
16. Eliminate "too many lookups". What happens, and why? Well, with too many lookups, your cache is eaten in memory - particularly on the 1.6 / 4.6 products. The end result is there is no memory left for the sessions to run in. The DTM reader/writer/transformer threads are not left with enough memory to be able to run efficiently. PC 1.7 / PM 4.7 solve some of these problems by caching some of these lookups out to disk when the cache is full. But you still end up with contention - in this case, with too many lookups, you're trading memory contention for disk contention. The memory contention might be worse than the disk contention, because the system OS ends up thrashing (swapping in and out of TEMP/SWAP disk space) with small block sizes to try to "find" your lookup row, and as the row goes from lookup to lookup, the swapping / thrashing gets worse.

17. Lookups and Aggregators fight. The lookups and the Aggregators fight for memory space as discussed above. Each requires an index cache and a data cache, and they "share" the same HEAP segments inside the core. See the memory layout document for more information. Particularly in the 4.6 / 1.6 products and prior, these memory areas become critical, and when dealing with many rows the session is almost certain to cause the server to "thrash" memory in and out of the OS swap space. If possible, separate the maps - perform the lookups in the first section of the maps, position the data in an intermediate target table, then have a second map read the target table and perform the aggregation (this also provides the option for a group by to be done within the database). Another speed improvement...

INFORMATICA ADVANCED TUNING GUIDELINES

The following numbered items are for advanced-level tuning. Please proceed cautiously, one step at a time. Do not attempt to follow these guidelines if you haven't already made it through all the basic and intermediate guidelines first. These guidelines may require a level of expertise which involves system administrators, database administrators, and network operations folks. Please be patient. The most important aspect of advanced tuning is to be able to pinpoint specific bottlenecks, then have the funding to address them. As usual, these advanced tuning guidelines come last, and are pointed at suggestions for the system. There are other advanced tuning guidelines available for data warehousing tuning; you can refer to those for questions surrounding your hardware / software resources.

1. Break the mappings out: 1 per target. If necessary, 1 per source per target. Why does this work? Well, eliminating multiple targets in a single mapping can greatly increase speed. Basically it's like this: one session per map/target. Each session establishes its own database connection. Because of the unique database connection, the DBMS server can now handle the insert/update/delete requests in parallel against multiple targets. It also helps to allow each session to be specified for its intended purpose (no longer mixing a data-driven session with INSERTs only to a single target). Each session can then be placed in to a batch marked "CONCURRENT" if preferences allow. Once this is done, the parallelism of mappings and sessions becomes obvious. A study of parallel processing has shown again and again that operations can sometimes be completed in half the time of their original counterparts merely by streaming them at the same time. With multiple targets in the same mapping, you're telling a single database connection to handle multiple diverse database statements - sometimes hitting this target, other times hitting that target. Think about it: in this situation it's extremely difficult for Informatica (or any other tool for that matter) to build BULK operations, even though "bulk" is specified in the session. Remember that "BULK" means this is your preference, and that the tool will revert to a NORMAL load if it can't provide a BULK operation on a series of consecutive rows. Obviously, data-driven sessions then force the tool down several other layers of internal code before the data can actually reach the database.
2. Develop maplets for complex business logic. It appears as if maplets do NOT cause any performance hindrance by themselves. Extensive use of maplets means better, more manageable business logic. Maplets also allow you to better break the mappings out.

3. Keep the mappings as simple as possible. Bury complex logic (if you must) in to a maplet. If you can avoid complex logic altogether, then that would be the key. The old rule of thumb applies here (common sense): the straighter the path between two points, the shorter the distance. Translated: the shorter the distance between the source qualifier and the target, the faster the data loads.

4. Remember that TIMING is affected by the READER/TRANSFORMER/WRITER threads. With complex mappings, don't forget that each ELEMENT (field) must be weighed - in this light a firm understanding of how to read the performance statistics generated by Informatica becomes important. In other words, if the reader is slow, then the rest of the threads suffer; if the writer is slow, same effect. A pipe is only as big as its smallest diameter, and a chain is only as strong as its weakest link. Sorry for the metaphors, but it should make sense.

5. Change the network packet size (for Sybase, MS SQL Server and Oracle users). The maximum network packet size is a database-wide setting, which usually defaults to 512 bytes or 1024 bytes. Setting the maximum database packet size doesn't necessarily hurt any of the other users; it does however allow the Informatica database setting to make use of the larger packet sizes, and thus transfer more data in a single packet, faster. The typical 'best' settings are between 10K and 20K. In Oracle, you'll need to adjust the Listener.ORA and TNSNames.ORA files. Include the parameters SDU (Session Data Unit size, in bytes) and TDU (Transport Data Unit size, in bytes). The SDU and TDU should be set equally. See the Informatica FAQ page for more information on setting these up.

6. Change to an IPC database connection for a local Oracle database. If PMServer and Oracle are running on the same server, use an IPC connection instead of a TCP/IP connection. Change the protocol in the TNSNames.ORA and Listener.ORA files, and restart the listener on the server. Be careful - this protocol can only be used locally; however, the speed increase from using Inter-Process Communication can be between 2x and 6x. IPC is utilized by Oracle, but is defined as a Unix System V standard specification. You can find more information on IPC by reading about it in Unix System V manuals.
7. Change database priorities for the PMServer database user. Prioritizing the database login that any of the connections use (set up in Server Manager) can assist in changing the priority given to the Informatica executing tasks. These tasks, when logged in to the database, can then over-ride others. Sizing memory for these tasks (in shared global areas, and server settings) must be done if priorities are to be changed. If BCP or SQL*Loader or some other bulk-load facility is utilized, these priorities must also be set. This can greatly improve performance. Again, it's only suggested as a last-resort method, and doesn't substitute for tuning the database or the mapping processes. It should only be utilized when all other methods have been exhausted (tuned). Keep in mind that this should only be relegated to the production machines, and only in certain instances where the load cycle that Informatica is utilizing is NOT impeding other users.

8. Change the Unix user priority. In order to gain speed, the Informatica Unix user must be given a higher priority. The Unix SA should understand what it takes to rank the Unix logins and grant priorities to particular tasks. Or, simply have the pmserver executed under a super user (SU) command; this will take care of re-prioritizing Informatica's core process. This should only be used as a last resort - once all other tuning avenues have been exhausted, or if you have a dedicated Unix machine on which Informatica is running.

9. Try not to load across the network. If at all possible, try to co-locate the PMServer executable with a local database. Not having the database local means: 1) the repository is across the network (slow); 2) the sources / targets are across the network, also potentially slow. If you have to load across the network, at least try to localize the repository on a database instance on the same machine as the server. The other thing is: try to co-locate the two machines (PMServer and the target database server) on the same sub-net, even the same hub if possible. This eliminates unnecessary routing of packets all over the network. Having a localized database also allows you to set up a target table locally, which you can then "dump" following a load, FTP to the target server, and bulk-load in to the target table. This works extremely well for situations where an append or complete refresh is taking place.

10. Set the session shared memory settings between 12 MB and 24 MB. Typically I've seen folks attempt to assign a session large heaps of memory (in hopes it will increase speed). All it tends to do is slow down the processing. See the memory layout document for further information on how this affects Informatica and its memory handling, and why simply giving it more memory doesn't necessarily provide speed.

11. Set the shared buffer block size around 128K. Again, something that's covered in the memory layout document. This seems to be a "sweet spot" for handling blocks of rows inside the Informatica process.

12. MEMORY SETTINGS. The settings above are for an average configured machine; any machine with less than 10 GB of RAM should abide by the above settings. If you've got 12+ GB, and you're running only 1 to 3 sessions concurrently, go ahead and specify the session shared memory size at 1 or 2 GB. Keep in mind that the shared buffer block size should be set in relative size to the shared memory setting. If you set shared memory to 124 MB, set the buffer block size to 12 MB; keep them in relative sizes. If you don't, the result will be more memory "handling" going on in the background, so less actual work will be done by Informatica. Also, this holds true for the simpler mappings; the more complex the mapping, the less likely you are to see a gain by increasing either the buffer block size or the shared memory settings - because Informatica potentially has to process cells (ports/fields/values) inside of a huge memory block, thus resulting in a potential re-allocation of the whole block.
13. Use SNAPSHOTS with your database. If you have dedicated lines (DS3/T1, etc.) between servers, use a snapshot or Advanced Replication to get data out of the source systems and in to a staging table (a duplicate of the source). Then schedule the snapshot before running processes. The RDBMS servers are built for this kind of data transfer, and have optimizations built in to the core to transfer data incrementally or as a whole refresh. It may be to your advantage, particularly if your sources contain 13 million+ rows. Point the Informatica processes to read from the snapshot; at that point you can index any way you like, and increase the throughput speed without affecting the source systems. Note that snapshots only work if your sources are homogeneous with your targets (on the same type of system).
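One possible shape for the snapshot described in point 13, assuming an Oracle source reachable over a database link. The names stg_orders_snap and src_prod_link are hypothetical, and on later Oracle releases the same object is written as a materialized view.

  CREATE SNAPSHOT stg_orders_snap
    REFRESH COMPLETE
    START WITH SYSDATE
    NEXT SYSDATE + 1      -- refresh once a day, ahead of the load window
  AS
  SELECT * FROM orders@src_prod_link;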
14. INCREASE THE DISK SPEED. One of the most common fallacies is that a data warehouse RDBMS needs only 2 controllers and 13 disks to survive. This is fine if you're running less than 5 million rows total through your system, or your load window exceeds 5 hours. I recommend at least 4 to 6 controllers and at least 50 disks, set on a RAID 0+1 array, spinning at 7200 RPM or better. If necessary, plunk the money down and go get an EMC device. You should see a significant increase in performance after installing or upgrading to such a configuration.

15. Switch to RAID 0+1. RAID level 5 is great for redundancy, horrible for data warehouse performance, particularly on bulk loads. RAID 0+1 is the preferred method for data warehouses out there, and most folks find that the replication is just as safe as RAID 5, particularly since the hardware is now nearly all hot-swappable, and the software to manage this has improved greatly.

16. Upgrade your hardware. On your production box, if you want gigabytes per second of throughput, or you want to create 10 indexes in 4 hours on 34 million rows, then add CPU power, RAM, and the disk modifications discussed above. A 4-CPU machine just won't cut the mustard today for this size of operation. I recommend a minimum of 8 CPUs as a starter box, and increase to 12 as necessary. Again, this is for huge data warehousing systems - GBs per hour / MBs per hour. A box with 4 CPUs is great for development, or for smaller systems (totaling less than 5 million rows in the warehouse). However, keep in mind that bus speed is also a huge factor here; I've heard of a 4-CPU DEC Alpha system outperforming a 6-CPU system. So what's the bottom line? Disk RPMs, bus speed, RAM, and number of CPUs - potentially in that order. Both Oracle and Sybase perform extremely well when given 6+ CPUs and 8 or 12 GB of RAM, set up on an EMC device at 7200 RPM with a minimum of 4 controllers.

Sorting - performance issues

You can improve Aggregator transformation performance by using the Sorted Input option. When the Sorted Input option is selected, the Informatica Server assumes all data is sorted by group. As the Informatica Server reads rows for a group, it performs aggregate calculations as it reads. When necessary, it stores group information in memory. To use the Sorted Input option, you must pass sorted data to the Aggregator transformation. You can gain added performance with sorted ports when you partition the session.

When Sorted Input is not selected, the Informatica Server performs aggregate calculations as it reads. However, since the data is not sorted, the Informatica Server stores data for each group until it reads the entire source, to ensure all aggregate calculations are accurate.

For example, one Aggregator has the STORE_ID and ITEM Group By ports, with the Sorted Input option selected. When you pass the following data through the Aggregator, the Informatica Server performs an aggregation for the three records in the 101/battery group as soon as it finds the new group, 201/battery:

STORE_ID  ITEM       QTY  PRICE
101       'battery'  3    2.99
101       'battery'  1    3.19
101       'battery'  2    2.59
201       'battery'  4    1.59
201       'battery'  1    1.99

If you use the Sorted Input option and do not presort data correctly, the session fails.

Sorted Input Conditions

Do not use the Sorted Input option if any of the following conditions are true:
* The aggregate expression uses nested aggregate functions.
* The session uses incremental aggregation.
* Input data is data-driven. You choose to treat source data as data driven in the session properties, or the Update Strategy transformation appears before the Aggregator transformation in the mapping.
* The mapping is upgraded from PowerMart 3.5.
If you use the Sorted Input option under these circumstances, the Informatica Server reverts to default aggregate behavior, reading all values before performing aggregate calculations.

Pre-Sorting Data

To use the Sorted Input option, you pass sorted data through the Aggregator. Data must be sorted as follows:
* By the Aggregator group by ports, in the order they appear in the Aggregator transformation.
* Using the same sort order configured for the session. If data is not in strict ascending or descending order based on the session sort order, the Informatica Server fails the session. For example, if you configure a session to use a French sort order, data passing into the Aggregator transformation must be sorted using the French sort order.
If the session uses file sources, you can use an external utility to sort the file data before starting the session. If the session uses relational sources, you can use the Number of Sorted Ports option in the Source Qualifier transformation to sort the group by columns in the source database. Group By columns must be in the exact same order in both the Aggregator and Source Qualifier transformations.

Indexes - make sure indexes are in place and tables have been analyzed. You might also be able to use index hints in the source qualifier.
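To tie the sorting and index notes together, the sketch below shows a source-qualifier-style query that returns data already ordered by the Aggregator group-by ports (STORE_ID, ITEM) and nudges the optimizer toward an index. The table and index names are hypothetical, and inside PowerCenter you could achieve the same ordering with the Number of Sorted Ports option instead of a hand-written override.

  SELECT /*+ INDEX(s idx_sales_store_item) */
         s.store_id,
         s.item,
         s.qty,
         s.price
    FROM sales s
   ORDER BY s.store_id, s.item    -- must match the Aggregator group-by port order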
Q: I want to add some new columns to a source table. I used this source in a lot of mappings. If I add new columns to my source table, will they be updated automatically in all mappings where I used this particular source?
-> If you add the columns to the source table, they will get reflected in all the mappings where that source table has been used. But if you want to use the added columns in a mapping, you need to modify that particular mapping.

Posted by sami at 3:13 PM

Labels: Informatica, Informatica Performance Tuning


Friday
Informatica Session Failures and Recovering Sessions
Session Failures and Recovering Sessions

Two types of errors occur in the server:
- Non-fatal
- Fatal

(a) Non-Fatal Errors
A non-fatal error is an error that does not force the session to stop on its first occurrence. Establish the error threshold in the session property sheet with the Stop On option. When you enable this option, the server counts non-fatal errors that occur in the reader, writer and transformations.
- Reader errors can include alignment errors while running a session in Unicode mode.
- Writer errors can include key constraint violations, loading a NULL into a NOT NULL field, and database errors.
- Transformation errors can include conversion errors and any condition set up as an ERROR, such as NULL input.

(b) Fatal Errors
A fatal error occurs when the server cannot access the source, target or repository. This can include loss of connection or target database errors, such as lack of database space to load data. If the session uses a Normalizer (or) Sequence Generator transformation, the server cannot update the sequence values in the repository, and a fatal error occurs.

(c) Others
- Usage of the ABORT function in mapping logic, to abort a session when the server encounters a transformation error.
- Stopping the server using pmcmd (or) the Server Manager.

Performing Recovery
- When the server starts a recovery session, it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target database. The server then reads all sources again and starts processing from the next row ID.
- By default, Perform Recovery is disabled in the setup. Hence it won't make entries in the OPB_SRVR_RECOVERY table.
- The recovery session moves through the states of a normal session: scheduled, waiting to run, initializing, running, completed and failed. If the initial recovery fails, you can run recovery as many times as needed.
- The normal reject loading process can also be done in the session recovery process.
- The performance of recovery might be low if:
  o the mapping contains mapping variables
  o the commit interval is high

Unrecoverable Sessions
Under certain circumstances, when a session does not complete, you need to truncate the target and run the session from the beginning.

Commit Intervals
A commit interval is the interval at which the server commits data to relational targets during a session.

(a) Target-based commit
- The server commits data based on the number of target rows and the key constraints on the target table. The commit point also depends on the buffer block size and the commit interval.
- During a session, the server continues to fill the writer buffer after it reaches the commit interval. When the buffer block is full, the Informatica server issues a commit command. As a result, the amount of data committed at the commit point generally exceeds the commit interval. (For example, with a commit interval of 10,000 rows and a buffer block that holds 7,500 rows, the first commit is issued when the second block fills, at about 15,000 rows.)
- The server commits data to each target based on primary key - foreign key constraints.

(b) Source-based commit
- The server commits data based on the number of source rows. The commit point is the commit interval you configure in the session properties.
- During a session, the server commits data to the target based on the number of rows from an active source in a single pipeline. These rows are referred to as source rows.
- A pipeline consists of a source qualifier and all the transformations and targets that receive data from that source qualifier.
- Although the Filter, Router and Update Strategy transformations are active transformations, the server does not use them as active sources in a source-based commit session.
- When the server runs a session, it identifies the active source for each pipeline in the mapping. The server generates a commit row from the active source at every commit interval.
- When each target in the pipeline receives the commit rows, the server performs the commit.
- When the server runs a session, it identifies the active source for each pipeline in the mapping. The server generates a commit row from the active source at every commit interval.
- When each target in the pipeline receives the commit row, the server performs the commit.

Reject Loading
During a session, the server creates a reject file for each target instance in the mapping. If the writer or the target rejects data, the server writes the rejected row into the reject file. You can correct the rejected data and reload it to relational targets using the reject loading utility. (You cannot load rejected data into a flat file target.) Each time you run a session, the server appends rejected data to the reject file.

Locating the bad files:
$PMBadFileDir / Filename.bad
When you run a partitioned session, the server creates a separate reject file for each partition.

Reading rejected data
Example: 3,D,1,D,D,0,D,1094345609,D,0,0.00
Two pieces of information help in finding the reason for the rejection:

(a) Row indicator
The row indicator tells the writer what to do with the row of data.
Row indicator - Meaning - Rejected by
0 - Insert - Writer or target
1 - Update - Writer or target
2 - Delete - Writer or target
3 - Reject - Writer
If the row indicator is 3, the writer rejected the row because an update strategy expression marked it for reject.

(b) Column indicator
A column indicator follows the first column of data, and further indicators appear after every column of data; each defines the type of data preceding it.
Column indicator - Meaning - Writer treats as
D - Valid data - Good data; the target accepts it unless a database error occurs, such as a duplicate key.
O - Overflow - Bad data.
N - Null - Bad data.
T - Truncated - Bad data.
NOTE: NULL columns appear in the reject file with commas marking their position.

Correcting the reject file
Use the reject file and the session log to determine the cause of the rejected data. Keep in mind that correcting the reject file does not necessarily correct the source of the reject; correct the mapping and the target database to eliminate some of the rejected data when you run the session again. Trying to correct target-rejected rows before correcting writer-rejected rows is not recommended, since they may contain misleading column indicators. For example, a series of "N" indicators might lead you to believe the target database does not accept NULL values, so you decide to change those NULL values to zero. However, if those rows also had a 3 in the row indicator column, the rows were rejected by the writer because of an update strategy expression, not because of a target database restriction. If you try to load the corrected file to the target, the writer will again reject those rows, and they will contain inaccurate 0 values in place of NULL values.

Why the writer can reject:
- Data overflowed column constraints
- An update strategy expression

Why the target database can reject:
- Data contains a NULL column
- Database errors, such as key violations
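The indicator codes above can be interpreted mechanically. Below is a minimal, hypothetical Python sketch (not an Informatica utility) that summarizes one reject file row such as the example above using the row and column indicator tables just described; because it only classifies indicator-looking tokens, a real parser would also need the target's column count and layout.

```python
# Hypothetical sketch: summarize the indicator codes in one reject-file row,
# using the row/column indicator tables described above.
ROW_INDICATORS = {"0": "Insert (rejected by writer or target)",
                  "1": "Update (rejected by writer or target)",
                  "2": "Delete (rejected by writer or target)",
                  "3": "Reject (marked by an update strategy expression)"}
COLUMN_INDICATORS = {"D": "Valid data", "O": "Overflow", "N": "Null", "T": "Truncated"}

def summarize_reject_row(line):
    tokens = line.strip().split(",")
    row_status = ROW_INDICATORS.get(tokens[0], "Unknown row indicator")
    # Only classifies tokens that look like column indicators; data values that
    # happen to be D/O/N/T would need the real column layout to disambiguate.
    column_flags = [COLUMN_INDICATORS[t] for t in tokens[1:] if t in COLUMN_INDICATORS]
    return row_status, column_flags

print(summarize_reject_row("3,D,1,D,D,0,D,1094345609,D,0,0.00"))
```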
Steps for loading the reject file:
- After correcting the rejected data, rename the reject file to reject_file.in.
- The reject loader uses the data movement mode configured for the server, as well as the code page of the server/OS. Hence, do not change these in the middle of reject loading.
- Use the reject loader utility:
Pmrejldr pmserver.cfg [folder name] [session name]

Other points
The server does not perform the following when using the reject loader:
(a) Source-based commit
(b) Constraint-based loading
(c) Truncate target table
(d) FTP targets
(e) External loading

Multiple reject loaders
You can run the session several times and correct rejected data from the several sessions at once. You can correct and load all of the reject files at once, or work on one or two reject files, load them, and work on the others at a later time.

External Loading
You can configure a session to use Sybase IQ, Teradata and Oracle external loaders to load session target files into the respective databases. The External Loader option can increase session performance, since these databases can load information directly from files faster than they can run SQL commands to insert the same data into the database.

Method: When a session uses an external loader, the session creates a control file and a target flat file. The control file contains information about the target flat file, such as the data format and loading instructions for the external loader. The control file has an extension of "*.ctl" and you can view the file in $PmTargetFilesDir.

For using an external loader, the following must be done:
- Configure an external loader connection in the Server Manager.
- Configure the session to write to a target flat file local to the server.
- Choose an external loader connection for each target file in the session property sheet.

Issues with the external loader:
- Disable constraints.
- Performance issues:
  o Increase commit intervals
  o Turn off database logging
- Code page requirements.
- The server can use multiple external loaders within one session (for example, a session with two target files, one using the Oracle external loader and another using the Sybase external loader).

Other information:
- External loader performance depends upon the platform of the server.
- The server loads data at different stages of the session.
- The server writes external loader initialization and completion messages in the session log. However, details about external loader performance are generated in the external loader log, which is stored in the same target directory.
- If the session contains errors, the server continues the external loader process. If the session fails, the server loads partial target data using the external loader.
- The external loader creates a reject file for data rejected by the database. The reject file has an extension of "*.ldr".
- The external loader saves the reject file in the target file directory.
- You can load corrected data from this file using the database's own reject loader, not the Informatica reject load utility (this applies to external loader reject files only).

Configuring the external loader in a session:
- In the Server Manager, open the session property sheet.
- Select File target, and then click Flat File Options.

Caches
- The server creates index and data caches in memory for the Aggregator, Rank, Joiner and Lookup transformations in a mapping.
- The server stores key values in the index cache and output values in the data cache; if the server requires more memory, it stores overflow values in cache files.
- When the session completes, the server releases cache memory and, in most circumstances, deletes the cache files.
Cache storage overview (index cache vs. data cache):
- Aggregator: the index cache stores group values, as configured in the group-by ports; the data cache stores calculations based on the group-by ports.
- Rank: the index cache stores group values, as configured in the group-by ports; the data cache stores ranking information based on the group-by ports.
- Joiner: the index cache stores index values for the master source table, as configured in the join condition; the data cache stores master source rows.
- Lookup: the index cache stores lookup condition information; the data cache stores lookup data that is not stored in the index cache.

Determining cache requirements
To calculate the cache size, you need to consider column and row requirements as well as processing overhead. The server requires processing overhead to cache data and index information. Column overhead includes a null indicator, and row overhead can include row-to-key information.
Steps:
- First, add the total column size in the cache to the row overhead.
- Multiply the result by the number of groups (or rows) in the cache; this gives the minimum cache requirement.
- For the maximum requirement, multiply the minimum requirement by 2.

Location
- By default, the server stores the index and data files in the directory $PMCacheDir.
- The server names the index files PMAGG*.idx and the data files PMAGG*.dat. If the size exceeds 2 GB, you may find multiple index and data files in the directory; the server appends a number to the end of the filename (PMAGG*.idx1, PMAGG*.idx2, etc.).

Aggregator caches
- When the server runs a session with an Aggregator transformation, it stores data in memory until it completes the aggregation.
- When you partition a source, the server creates one memory cache and one disk cache for each partition. It routes data from one partition to another based on the group key values of the transformation.
- The server uses memory, not cache files, to process an Aggregator transformation with sorted ports; you do not need to configure cache memory for Aggregators that use sorted ports.

Aggregator index cache: #Groups * ((Σ column size) + 7)
Aggregator data cache: #Groups * ((Σ column size) + 7)

Rank cache
- When the server runs a session with a Rank transformation, it compares an input row with rows in the data cache. If the input row out-ranks a stored row, the server replaces the stored row with the input row.
- If the Rank transformation is configured to rank across multiple groups, the server ranks incrementally for each group it finds.

Rank index cache: #Groups * ((Σ column size) + 7)
Rank data cache: #Groups * ((#Ranks * (Σ column size + 10)) + 20)

Joiner cache
- When the server runs a session with a Joiner transformation, it reads all rows from the master source and builds memory caches based on the master rows.
- After building these caches, the server reads rows from the detail source and performs the joins.
- The server creates the index cache as it reads the master source into the data cache. The server uses the index cache to test the join condition; when it finds a match, it retrieves row values from the data cache.
- To improve Joiner performance, the server aligns all data for the Joiner cache on an eight-byte boundary.

Joiner index cache: #Master rows * ((Σ column size) + 16)
Joiner data cache: #Master rows * ((Σ column size) + 8)

Lookup cache
- When the server runs a Lookup transformation, it builds a cache in memory when it processes the first row of data in the transformation.
- The server builds the cache and queries it for each row that enters the transformation.
- If you partition the source pipeline, the server allocates the configured amount of memory for each partition.
- If two Lookup transformations share the cache, the server does not allocate additional memory for the second Lookup transformation.
- The server creates index and data cache files in the lookup cache directory and uses the server code page to create the files.

Lookup index cache: #Rows in lookup table * ((Σ column size) + 16)
Lookup data cache: #Rows in lookup table * ((Σ column size) + 8)
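For convenience, the formulas above can be turned into a small calculator. The sketch below is a hypothetical helper that simply encodes the formulas as written (these are minimum requirements; multiply by 2 for the maximum, per the steps above). It is not an Informatica utility, and the example figures are made up.

```python
# Hypothetical cache-size calculator encoding the formulas listed above.
# column_size is the total (summed) cached column size, in bytes.

def aggregator_cache(groups, column_size):
    index = groups * (column_size + 7)
    data = groups * (column_size + 7)
    return index, data

def rank_cache(groups, ranks, column_size):
    index = groups * (column_size + 7)
    data = groups * (ranks * (column_size + 10) + 20)
    return index, data

def joiner_cache(master_rows, column_size):
    return master_rows * (column_size + 16), master_rows * (column_size + 8)

def lookup_cache(lookup_rows, column_size):
    return lookup_rows * (column_size + 16), lookup_rows * (column_size + 8)

# Minimum requirement for an aggregator with 10,000 groups and 120 bytes of
# cached columns; double it for the maximum requirement.
idx, dat = aggregator_cache(10_000, 120)
print(idx, dat, 2 * idx, 2 * dat)
```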
Transformations
A transformation is a repository object that generates, modifies or passes data.
(a) Active transformation: can change the number of rows that pass through it (Filter, Normalizer, Rank, ...).
(b) Passive transformation: does not change the number of rows that pass through it (Expression, Lookup, ...).
NOTE:
- Transformations can be connected to the data flow, or they can be unconnected.
- An unconnected transformation is not connected to other transformations in the mapping. It is called from within another transformation and returns a value to that transformation.

Reusable Transformations
When you add a reusable transformation to a mapping, the definition of the transformation exists outside the mapping, while an instance appears within the mapping. All changes you make to the transformation are immediately reflected in its instances.
You can create a reusable transformation by two methods:
(a) Designing it in the Transformation Developer
(b) Promoting a standard transformation
Changes such as expression logic are reflected in the mappings; changes to port names, etc. are not.

Handling High-Precision Data
- The server processes decimal values as doubles or decimals.
- When you create a session, you choose to enable the Decimal datatype or let the server process the data as double (precision of 15).
Example: You may have a mapping with a decimal (20,0) port that passes through the value 40012030304957666903. If you enable decimal arithmetic, the server passes the number as it is. If you do not enable decimal arithmetic, the server passes 4.00120303049577 x 10^19. If you want to process a decimal value with a precision greater than 28 digits, the server automatically treats it as a double value.

Mapplets
When the server runs a session using a mapplet, it expands the mapplet. The server then runs the session as it would any other session, passing data through each transformation in the mapplet. If you use a reusable transformation in a mapplet, changes to it can invalidate the mapplet and every mapping using the mapplet. You can create a non-reusable instance of a reusable transformation.
Mapplet objects:
(a) Input transformation
(b) Source qualifier
(c) Transformations, as you need
(d) Output transformation
Mapplets won't support:
- Joiner
- Normalizer
- Pre/post-session stored procedures
- Target definitions
- XML source definitions
Types of mapplets:
(a) Active mapplets - contain one or more active transformations
(b) Passive mapplets - contain only passive transformations
Copied mapplets are not instances of the original mapplet; if you make changes to the original, the copy does not inherit your changes. You can use a single mapplet more than once in a mapping.

Ports
Default value for an input port - NULL
Default value for an output port - ERROR
Default value for variables - does not support default values

Session Parameters
These parameters represent values you might want to change between sessions, such as a DB connection or a source file. You use a session parameter in a session property sheet, then define the parameter in a session parameter file.
The user-defined session parameters are:
(a) DB connection
(b) Source file directory
(c) Target file directory
(d) Reject file directory

Description: Use session parameters to make sessions more flexible. For example, you have the same type of transactional data written to two different databases, and you use the database connections TransDB1 and TransDB2 to connect to them. You want to use the same mapping for both tables. Instead of creating two sessions for the same mapping, you can create a database connection parameter, such as $DBConnectionSource, and use it as the source database connection for the session. When you create a parameter file for the session, you set $DBConnectionSource to TransDB1 and run the session. After it completes, set the value to TransDB2 and run the session again.
NOTE: You can use several parameters together to make session management easier. Session parameters do not have default values; when the server cannot find a value for a session parameter, it fails to initialize the session.

Session Parameter File (a sketch of one follows this section)
- A parameter file is created with a text editor.
- In it, you specify the folder and session name, then list the parameters and variables used in the session and assign each a value.
- Save the parameter file in any directory and load it to the server.
- You can define the following values in a parameter file:
  o Mapping parameters
  o Mapping variables
  o Session parameters
- You can include parameter and variable information for more than one session in a single parameter file by creating separate sections for each session within the parameter file.
- You can override the parameter file for sessions contained in a batch by using a batch parameter file. A batch parameter file has the same format as a session parameter file.

Locale
The Informatica server can transform character data in two modes:
(a) ASCII
  a. The default mode
  b. Passes 7-bit, US-ASCII character data
(b) UNICODE
  a. Passes 8-bit, multibyte character data
  b. Uses 2 bytes for each character to move data and performs additional checks at the session level to ensure data integrity
Code pages contain the encoding to specify characters in a set of one or more languages. Select a code page based on the type of character data in the mappings. Compatibility between code pages is essential for accurate data movement. The various code page components are:
- Operating system locale settings
- Operating system code page
- Informatica server data movement mode
- Informatica server code page
- Informatica repository code page
Locale:
(a) System locale - system default
(b) User locale - settings for date, time, display
(c) Input locale

Mapping Parameters and Variables
These represent values in mappings and mapplets. If you declare mapping parameters and variables in a mapping, you can reuse the mapping by altering the parameter and variable values of the mapping in the session. This can reduce the overhead of creating multiple mappings when only certain attributes of a mapping need to be changed. Use a mapping parameter when you want to use the same value each time you run the session. Unlike a mapping parameter, a mapping variable represents a value that can change through the session. The server saves the value of a mapping variable to the repository at the end of each successful run and uses that value the next time you run the session.

Mapping objects: Source, Target, Transformation, Cubes, Dimension
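As referenced above, here is a minimal, hypothetical sketch of generating a session parameter file with a script. The section-header style, folder and session names, and the parameter names other than $DBConnectionSource are illustrative assumptions; check the documentation for your PowerMart/PowerCenter version for the exact syntax.

```python
# Hypothetical sketch: write a session parameter file for the example above.
# The [Folder.Session] header style and the folder/session/parameter names
# are assumptions; verify the exact format for your Informatica version.
params = {"$DBConnectionSource": "TransDB1",
          "$SourceFileDir": "/data/incoming",
          "$BadFileDir": "/data/reject"}

with open("transdb1_run.par", "w") as f:
    f.write("[SALES.s_m_LOAD_CUSTOMERS]\n")   # one section per folder.session
    for name, value in params.items():
        f.write(f"{name}={value}\n")
```

To switch the run to the second database, you would regenerate (or edit) the file with $DBConnectionSource=TransDB2 and run the session again, exactly as described in the example above.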
Debugger
We can run the Debugger in two situations:
(a) Before the session: after saving the mapping, we can run some initial tests.
(b) After the session: the real debugging process.

Metadata Reporter
- A web-based application that allows you to run reports against repository metadata.
- Reports include executed sessions, lookup table dependencies, mappings and source/target schemas.

Repository
Types of repository:
(a) Global Repository
  a. This is the hub of the domain. Use the global repository to store common objects that multiple developers can use through shortcuts. These may include operational or application source definitions, reusable transformations, mapplets and mappings.
(b) Local Repository
  a. A local repository is any repository within a domain that is not the global repository. Use the local repository for development.
(c) Standard Repository
  a. A repository that functions individually, unrelated and unconnected to other repositories.
NOTE:
- Once you create a global repository, you cannot change it to a local repository.
- However, you can promote a local repository to a global repository.

Server Concepts
The Informatica server uses three system resources:
(a) CPU
(b) Shared memory
(c) Buffer memory
The Informatica server uses shared memory, buffer memory and cache memory for session information and to move data between session threads.

LM Shared Memory
The Load Manager uses both process memory and shared memory. The LM keeps the server's list of sessions and batches, and the schedule queue, in process memory. Once a session starts, the LM uses shared memory to store session details for the duration of the session run or session schedule. This shared memory appears as the configurable parameter LMSharedMemory, and the server allots 2,000,000 bytes by default. This allows you to schedule or run approximately 10 sessions at one time.

DTM Buffer Memory
The DTM process allocates buffer memory to the session based on the DTM buffer pool size setting in the session properties. By default, it allocates 12,000,000 bytes of memory to the session. The DTM divides the memory into buffer blocks as configured in the buffer block size setting (default: 64,000 bytes per block). A quick sizing sketch follows the task list below.

Running a Session
The following tasks are performed during a session:
1. LM locks the session and reads the session properties.
2. LM reads the parameter file.
3. LM expands server/session variables and parameters.
4. LM verifies permissions and privileges.
5. LM validates source and target code pages.
6. LM creates the session log file.
7. LM creates the DTM process.
8. DTM process allocates DTM process memory.
9. DTM initializes the session and fetches the mapping.
10. DTM executes pre-session commands and procedures.
11. DTM creates reader, writer and transformation threads for each pipeline.
12. DTM executes post-session commands and procedures.
13. DTM writes historical incremental aggregation/lookup data to the repository.
14. LM sends post-session emails.
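To make the default memory figures above concrete, the sketch below (a hypothetical back-of-the-envelope helper, not an Informatica tool) derives the approximate per-session LM shared memory and the number of DTM buffer blocks from the defaults just quoted.

```python
# Back-of-the-envelope arithmetic based on the defaults quoted above.
LM_SHARED_MEMORY = 2_000_000      # bytes, default LMSharedMemory
APPROX_CONCURRENT_SESSIONS = 10   # per the text above

DTM_BUFFER_POOL = 12_000_000      # bytes, default DTM buffer pool size
BUFFER_BLOCK_SIZE = 64_000        # bytes, default buffer block size

per_session_lm = LM_SHARED_MEMORY // APPROX_CONCURRENT_SESSIONS
buffer_blocks = DTM_BUFFER_POOL // BUFFER_BLOCK_SIZE

print(f"~{per_session_lm} bytes of LM shared memory per scheduled session")
print(f"{buffer_blocks} buffer blocks available to the session")
```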
Stopping and Aborting a Session
- If the session you want to stop is part of a batch, you must stop the batch.
- If the batch is part of a nested batch, stop the outermost batch.
- When you issue the stop command, the server stops reading data. It continues processing, writing data and committing data to targets.
- If the server cannot finish processing and committing data, you can issue the ABORT command. It is similar to the stop command, except that it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process and terminates the session.

Recovery:
- After a session is stopped or aborted, the session results can be recovered. When recovery is performed, the session continues from the point at which it stopped.
- If you do not recover the session, the server runs the entire session the next time.
- Hence, after stopping/aborting, you may need to manually delete targets before the session runs again.
NOTE: The ABORT command and the ABORT function are different things.

When Can a Session Fail
- The server cannot allocate enough system resources.
- The session exceeds the maximum number of sessions the server can run concurrently.
- The server cannot obtain an execute lock for the session (the session is already locked).
- The server is unable to execute post-session shell commands or post-load stored procedures.
- The server encounters database errors.
- The server encounters transformation row errors (for example, a NULL value in a non-null field).
- Network-related errors.

When Pre/Post-Session Shell Commands Are Useful
- To delete a reject file.
- To archive target files before the session begins.

Session Performance
- Minimal logging (Terse).
- Partitioning source data.
- Performing ETL for each partition in parallel (multiple CPUs are needed for this).
- Adding indexes.
- Changing the commit level.
- Using a Filter transformation to remove unwanted data movement.
- Increasing buffer memory when handling a large volume of data.
- Multiple lookups can reduce performance; verify the largest lookup table and tune the expressions.
- At the session level, the causes are a small cache size, low buffer memory and a small commit interval.
- At the system level:
  o WIN NT/2000: use the Task Manager.
  o UNIX: vmstat, iostat.

Hierarchy of optimization
- Target
- Source
- Mapping
- Session
- System

Optimizing target databases:
- Drop indexes/constraints.
- Increase checkpoint intervals.
- Use bulk loading/external loading.
- Turn off recovery.
- Increase database network packet size.

Source level:
- Optimize the query (for example, GROUP BY, ORDER BY).
- Use conditional filters.
- Connect to the RDBMS using the IPC protocol.

Mapping:
- Optimize datatype conversions.
- Eliminate transformation errors.
- Optimize transformations/expressions.

Session:
- Use concurrent batches.
- Partition sessions.
- Reduce error tracing.
- Remove the staging area.
- Tune session parameters.

System:
- Improve network speed.
- Use multiple PM servers on separate systems.
- Reduce paging.

Session Process
The Informatica server uses both process memory and system shared memory to perform the ETL process. It runs as a daemon on UNIX and as a service on WIN NT.
The following processes are used to run a session:
(a) Load Manager process:
- starts the session
- creates the DTM process, which creates the session
(b) DTM process:
- creates threads to initialize the session
- reads, writes and transforms data
- handles pre/post-session operations

Load Manager processes:
- Manages session/batch scheduling.
- Locks the session.
- Reads the parameter file.
- Expands server/session variables and parameters.
- Verifies permissions/privileges.
- Creates the session log file.

DTM process:
The primary purpose of the DTM is to create and manage the threads that carry out the session tasks. The DTM allocates process memory for the session and divides it into buffers; this is known as buffer memory. The default memory allocation is 12,000,000 bytes. It creates the main thread, called the master thread, which manages all other threads.

Thread functions:
- Master thread: handles stop and abort requests from the Load Manager.
- Mapping thread: one thread for each session; fetches session and mapping information, compiles the mapping, and cleans up after execution.
- Reader thread: one thread for each partition; relational sources use relational threads and flat files use file threads.
- Writer thread: one thread for each partition; writes to the target.
- Transformation thread: one or more transformation threads for each partition.
Note: When you run a session, the threads for a partitioned source execute concurrently. The threads use buffers to move and transform data.
Posted by sami at 7:17 PM

Labels: Informatica


Thursday
Informatica Transformations - Overview

INFORMATICA TRANSFORMATIONS

Aggregator
Expression
Filter
Joiner
Lookup
Rank
Router
Sequence Generator
Stored Procedure
Source Qualifier
Update Strategy
XML source qualifier
External Procedure
Advanced External Procedure
Expression Transformation
- You can use the Expression Transformation (ET) to calculate values in a single row before you write to the target.
- You can use the ET to perform any non-aggregate calculation.
- To perform calculations involving multiple rows, such as sums or averages, use the Aggregator. Unlike the ET, the Aggregator Transformation allows you to group and sort data.

Calculation
To use the Expression Transformation to calculate values for a single row, you must include the following ports:
- An input port for each value used in the calculation
- An output port for the expression

NOTE: You can enter multiple expressions in a single ET. As long as you enter only one expression for each output port, you can create any number of output ports in the Expression Transformation. In this way, you can use one Expression Transformation rather than creating separate transformations for each calculation that requires the same set of data.

Sequence Generator Transformation
- Creates keys.
- Replaces missing values.
- It contains two output ports that you can connect to one or more transformations. The server generates a value each time a row enters a connected transformation, even if that value is not used.
- The two ports are NEXTVAL and CURRVAL.
- The SGT can be reusable.
- You cannot edit the default ports (NEXTVAL, CURRVAL).

SGT properties:
- Start Value
- Increment By
- End Value
- Current Value
- Cycle (if selected, the server cycles through the sequence range; otherwise it stops at the configured end value)
- Reset
- Number of cached values

NOTE:
- Reset is disabled for a reusable SGT.
- Unlike other transformations, you cannot override SGT properties at the session level. This protects the integrity of the sequence values generated.
(A sketch mimicking these properties appears after the Aggregator notes below.)

Aggregator Transformation
Difference between the Aggregator and Expression Transformations: we can use the Aggregator to perform calculations on groups, whereas the Expression transformation permits calculations on a row-by-row basis only.
The server performs aggregate calculations as it reads, and stores the necessary group and row data in an aggregator cache. When incremental aggregation occurs, the server passes new source data through the mapping and uses historical cache data to perform the new calculations incrementally.
Components:
- Aggregate expression
- Group-by port
- Aggregate cache
When a session is run using an Aggregator transformation, the server creates index and data caches in memory to process the transformation. If the server requires more space, it stores overflow values in cache files.
NOTE: The performance of an Aggregator transformation can be improved by using the Sorted Input option. When this is selected, the server assumes all data is sorted by group.
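To make the SGT properties referenced above more tangible, here is a small, hypothetical Python sketch that mimics NEXTVAL-style generation with Start Value, Increment By, End Value and Cycle. It is only an analogy for how those properties interact, not how the server implements the transformation, and the default end value is an illustrative assumption.

```python
# Hypothetical analogue of a Sequence Generator: next-value generation with
# start value, increment, end value and cycle, mirroring the properties above.
class SequenceGenerator:
    def __init__(self, start=1, increment=1, end=999_999_999, cycle=False):
        self.start, self.increment, self.end, self.cycle = start, increment, end, cycle
        self.currval = None   # last value generated, for illustration only

    def nextval(self):
        nxt = self.start if self.currval is None else self.currval + self.increment
        if nxt > self.end:
            if not self.cycle:
                raise RuntimeError("end value reached and Cycle is not set")
            nxt = self.start          # cycle back through the sequence range
        self.currval = nxt
        return nxt

seq = SequenceGenerator(start=1, increment=1, end=3, cycle=True)
print([seq.nextval() for _ in range(5)])   # [1, 2, 3, 1, 2]
```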

Incremental Aggregation
- Using this, you apply captured changes in the source to aggregate calculations in a session. If the source changes only incrementally and you can capture the changes, you can configure the session to process only those changes.
- This allows the server to update the target incrementally, rather than forcing it to process the entire source and recalculate the same calculations each time you run the session.

Steps:
- The first time you run a session with incremental aggregation enabled, the server processes the entire source.
- At the end of the session, the server stores the aggregate data from that session run in two files, the index file and the data file, created in the local directory.
- The second time you run the session, use only the changes in the source as source data for the session. The server then performs the following actions:
(1) For each input record, the session checks the historical information in the index file for a corresponding group. If it finds a corresponding group, the server performs the aggregate operation incrementally, using the aggregate data for that group, and saves the incremental change. Otherwise, the server creates a new group and saves the record data.
(2) When writing to the target, the server applies the changes to the existing target:
  o Updates modified aggregate groups in the target
  o Inserts new aggregate data
  o Deletes removed aggregate data
  o Ignores unchanged aggregate data
  o Saves modified aggregate data in the index/data files to be used as historical data the next time you run the session
Each subsequent time you run the session with incremental aggregation, you use only the incremental source changes in the session. If the source changes significantly and you want the server to continue saving aggregate data for future incremental changes, configure the server to overwrite the existing aggregate data with new aggregate data.

Use incremental aggregation only if:
- The mapping includes an aggregate function.
- The source changes only incrementally.
- You can capture incremental changes (you might do this by filtering source data by timestamp).
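The two-step algorithm above can be illustrated with a small, hypothetical sketch. The JSON file standing in for the historical index/data files, and the SUM aggregate, are assumptions for illustration; the server's actual cache files are binary and managed internally.

```python
# Hypothetical illustration of incremental aggregation: a persisted group cache
# is updated with only the changed source rows, mirroring the steps above.
import json, os

CACHE_FILE = "agg_cache.json"   # stand-in for the server's index/data files

def incremental_sum(changed_rows):
    """changed_rows: iterable of (group_key, amount) captured since the last run."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cache = json.load(f)           # historical data from the previous run
    else:
        cache = {}                         # first run: everything is new
    for group, amount in changed_rows:
        if group in cache:
            cache[group] += amount         # existing group: aggregate incrementally
        else:
            cache[group] = amount          # new group: create and save it
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)                # saved as historical data for next time
    return cache                           # these groups are applied to the target

print(incremental_sum([("EAST", 100.0), ("WEST", 40.0)]))
print(incremental_sum([("EAST", 25.0)]))   # second run processes only the changes
```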
External Procedure Transformation
- When Informatica's transformations do not provide the exact functionality we need, we can develop complex functions within a dynamic link library or a Unix shared library.
- To obtain this kind of extensibility, we can use the Transformation Exchange (TX) dynamic invocation interface built into PowerMart/PowerCenter.
- Using TX, you can create an External Procedure Transformation and bind it to an external procedure that you have developed.
- Two types of external procedures are available:
  COM external procedures (only for WIN NT/2000)
  Informatica external procedures (available for WIN NT, Solaris, HP-UX, etc.)

Components of TX:
(a) External procedure
This exists separately from the Informatica server. It consists of C++ or VB code written by a developer. The code is compiled and linked into a DLL or shared library, which is loaded by the Informatica server at runtime.
(b) External Procedure Transformation
This is created in the Designer and is an object that resides in the Informatica repository. It serves in several ways:
  o It contains metadata describing the external procedure.
  o It allows an external procedure to be referenced in a mapping by adding an instance of an External Procedure transformation.
All External Procedure Transformations must be defined as reusable transformations. Therefore you cannot create an External Procedure transformation directly in a mapping; you can create it only within the Transformation Developer of the Designer and then add instances of the transformation to mappings.

Difference Between the Advanced External Procedure and External Procedure Transformations

Advanced External Procedure Transformation
- The input and output functions occur separately.
- The output function is a separate callback function provided by Informatica that can be called from the Advanced External Procedure library.
- The output callback function is used to pass all the output port values from the Advanced External Procedure library to the Informatica server.
- Multiple outputs (multiple-row input and multiple-row output).
- Supports Informatica procedures only.
- Active transformation.
- Connected only.

External Procedure Transformation
- In the External Procedure Transformation, a single external procedure function does both input and output, and its parameters consist of all the ports of the transformation.
- Single return value (one-row input and one-row output).
- Supports COM and Informatica procedures.
- Passive transformation.
- Connected or unconnected.

By default, the Advanced External Procedure Transformation is an active transformation. However, we can configure it to be passive by clearing the "IS ACTIVE" option on the Properties tab.

Lookup Transformation
- We use this to look up data in a related table, view or synonym.
- You can use multiple Lookup transformations in a mapping.
- The server queries the lookup table based on the lookup ports in the transformation. It compares lookup port values to lookup table column values, based on the lookup condition.

Types:
(a) Connected or unconnected.
(b) Cached or uncached. If you cache the lookup table, you can choose to use a dynamic or static cache. By default, the lookup cache remains static and does not change during the session; with a dynamic cache, the server inserts rows into the cache during the session. Informatica recommends that you cache the target table as the lookup; this enables you to look up values in the target and insert them if they do not exist.

You can configure a connected lookup to receive input directly from the mapping pipeline, or you can configure an unconnected lookup to receive input from the result of an expression in another transformation.

Differences between connected and unconnected lookups:

Connected
o Receives input values directly from the pipeline.
o Uses a dynamic or static cache.
o Returns multiple values.
o Supports user-defined default values.

Unconnected
o Receives input values from the result of a :LKP expression in another transformation.
o Uses a static cache only.
o Returns only one value.
o Does not support user-defined default values.

NOTES
o A common use of an unconnected lookup is to update slowly changing dimension tables.
o Lookup components are: (a) lookup table, (b) ports, (c) properties, (d) condition.

Lookup tables: This can be a single table, or you can join multiple tables in the same database using a lookup SQL override. You can improve lookup initialization time by adding an index to the lookup table.

Lookup ports: There are 3 port types in a connected lookup transformation (I/P, O/P, LKP) and 4 in an unconnected lookup (I/P, O/P, LKP and return port). If you are certain that a mapping does not use a lookup port, you can delete it from the transformation; this reduces the amount of memory used.

Lookup properties: You can configure properties such as the SQL override for the lookup, the lookup table name, and the tracing level for the transformation.
Lookup condition: You can enter the conditions you want the server to use to determine whether input data qualifies against values in the lookup table or cache. When you configure a lookup condition for the transformation, you compare transformation input values with values in the lookup table or cache, which are represented by lookup ports. When you run the session, the server queries the lookup table or cache for all incoming values, based on the condition.

NOTE
- If you configure a lookup to use a static cache, you can use the following operators: =, >, <, >=, <=, !=. If you use a dynamic cache, only = can be used.
- When you do not configure the lookup for caching, the server queries the lookup table for each input row. The result is the same regardless of whether a cache is used; however, using a lookup cache can increase session performance when the source table is large (many rows pass through the lookup).
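The cached-versus-uncached and static-versus-dynamic distinctions above can be sketched roughly as follows. This is a hypothetical Python analogy (a dict standing in for the cached lookup table), not the server's implementation, and it only models the equality (=) condition that a dynamic cache supports.

```python
# Hypothetical analogy: static vs. dynamic lookup caching on an equality condition.
class StaticLookupCache:
    """Built once, then only read - mirrors the default static cache."""
    def __init__(self, rows):
        self.cache = dict(rows)     # stand-in for the cached lookup table
    def lookup(self, key):
        return self.cache.get(key)

class DynamicLookupCache(StaticLookupCache):
    """Inserts rows it has not seen - mirrors a dynamic cache on a target table."""
    def lookup(self, key, new_value=None):
        if key not in self.cache and new_value is not None:
            self.cache[key] = new_value        # insert into the cache mid-session
        return self.cache.get(key)

static = StaticLookupCache({"A": 1, "B": 2})
dynamic = DynamicLookupCache({"A": 1})
print(static.lookup("C"))          # None - a static cache never grows
print(dynamic.lookup("C", 3))      # 3    - the dynamic cache inserted the new row
```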

Performance tips:

- Add an index to the columns used in a Lookup condition.

- Place conditions with an equality operator (=) first.

- Cache small Lookup tables .

- Don’t use an ORDER BY clause in SQL override.

- Call unconnected Lookups with :LKP reference qualifier.

Normalizer Transformation

- Normalization is the process of organizing data. In database terms, this includes creating normalized tables and establishing relationships between those tables, according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependencies.

- The Normalizer transformation (NT) normalizes records from COBOL and relational sources, allowing you to organize the data according to your own needs.

- An NT can appear anywhere in a data flow when you normalize a relational source.

- Use a normalizer transformation, instead of source qualifier transformation when you normalize a COBOL source.

- The OCCURS statement in a COBOL file nests multiple records of information in a single record.

- Using the NT, you break out the repeated data within a record into separate records. For each new record it creates, the NT generates a unique identifier. You can use this key value to join the normalized records.
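Conceptually, that "break out repeated data and generate a key" step looks like the hypothetical sketch below. The input shape (a record with a repeating list of quarterly amounts, roughly what a COBOL OCCURS clause produces) and the generated-key scheme are assumptions for illustration only.

```python
# Hypothetical illustration of what a Normalizer does with repeated (OCCURS) data:
# one input record with a repeating group becomes several output records,
# each carrying a generated key so they can be joined back together.
from itertools import count

generated_key = count(1)   # stand-in for the Normalizer's generated key

def normalize(record):
    key = next(generated_key)
    for occurrence, amount in enumerate(record["quarterly_sales"], start=1):
        yield {"GK": key, "customer": record["customer"],
               "quarter": occurrence, "sales": amount}

cobol_like_record = {"customer": "ACME", "quarterly_sales": [10.0, 12.5, 9.0, 14.0]}
for row in normalize(cobol_like_record):
    print(row)
```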

Stored Procedure Transformation

- DBA creates stored procedures to automate time consuming tasks that are too complicated for standard SQL statements.

- A stored procedure is a precompiled collection of transact SQL statements and optional flow control statements, similar to an executable script.

- Stored procedures are stored and run within the database. You can run a stored procedure with the EXECUTE SQL statement in a database client tool, just as you would SQL statements. But unlike standard SQL statements, stored procedures allow user-defined variables, conditional statements and other programming features.

Usages of Stored Procedure

- Drop and recreate indexes.

- Check the status of target database before moving records into it.

- Determine database space.

- Perform a specialized calculation.

NOTE -

The Stored Procedure must exist in the database before creating a Stored Procedure Transformation, and the Stored procedure can exist in a source, target or any database with a valid connection to the server.

TYPES

- Connected Stored Procedure Transformation (Connected directly to the mapping)

- Unconnected Stored Procedure Transformation (Not connected directly to the flow of the mapping. Can be called from an Expression Transformation or other transformations)

Running a Stored Procedure

The options for running a Stored Procedure Transformation:

- Normal

- Pre load of the source

- Post load of the source

- Pre load of the target

- Post load of the target

You can run several stored procedure transformations in different modes in the same mapping. Stored Procedure Transformations are created as Normal type by default, which means that they run during the mapping, not before or after the session. They are also not created as reusable transformations.

If you want to: Use below mode

Run a SP before/after the session Unconnected

Run a SP once during a session Unconnected

Run a SP for each row in data flow Unconnected/Connected

Pass parameters to SP and receive a single return value Connected

A normal connected SP will have I/P and O/P ports; the return port is also an output port, marked as 'R'.

Error Handling

- This can be configured in server manager (Log & Error handling)

- By default, the server stops the session .

Rank Transformation

- This allows you to select only the top or bottom rank of data. It returns the largest or smallest numeric values in a port or group.

- You can also use Rank Transformation to return the strings at the top or the bottom of a session sort order. During the session, the server caches input data until it can perform the rank calculations.

- The Rank Transformation differs from the MAX and MIN functions in that it allows you to select a group of top/bottom values, not just one value.

- As an active transformation, Rank transformation might change the number of rows passed through it.

Rank Transformation Properties

- Cache directory

- Top or Bottom rank

- Input/Output ports that contain values used to determine the rank.

Different ports in Rank Transformation

I - Input

O - Output

V - Variable

R - Rank

Rank Index

The designer automatically creates a RANKINDEX port for each rank transformation. The server uses this Index port to store the ranking position for each row in a group. The RANKINDEX is an output port only. You can pass the RANKINDEX to another transformation in the mapping or directly to a target.
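A rough Python analogy for "top N per group with a rank index" is sketched below. The use of heapq.nlargest and a dictionary for grouping is purely illustrative; the server performs this work with its rank cache as described above.

```python
# Hypothetical analogy of a top-N rank per group, with a rank index per row,
# similar in spirit to the RANKINDEX described above.
import heapq
from collections import defaultdict

def top_n_per_group(rows, n=3):
    """rows: iterable of (group_key, value). Yields (group, rank_index, value)."""
    groups = defaultdict(list)
    for group, value in rows:
        groups[group].append(value)                 # cache input data per group
    for group, values in groups.items():
        for rank_index, value in enumerate(heapq.nlargest(n, values), start=1):
            yield group, rank_index, value          # rank_index plays the RANKINDEX role

sales = [("EAST", 50), ("EAST", 80), ("EAST", 20), ("WEST", 70), ("WEST", 30)]
print(list(top_n_per_group(sales, n=2)))
```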

Filter Transformation

- As an active transformation, the Filter Transformation may change the no of rows passed through it.

- A filter condition returns TRUE/FALSE for each row that passes through the transformation, depending on whether a row meets the specified condition.

- Only rows that return TRUE pass through this filter and discarded rows do not appear in the session log/reject files.

- To maximize the session performance, include the Filter Transformation as close to the source in the mapping as possible.

- The filter transformation does not allow setting output default values.

- To filter out rows with NULL values, use the ISNULL and IS_SPACES functions.

Joiner Transformation

Source Qualifier: can join data originating from a common source database.

Joiner Transformation: joins two related heterogeneous sources residing in different locations or file systems. To join more than two sources, we can add additional Joiner transformations.
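The master/detail behaviour described in the Joiner cache section earlier (read all master rows into a cache, then stream the detail rows against it) amounts to a hash join. Here is a minimal, hypothetical sketch of that idea; the column names and the inner-join semantics are assumptions for illustration.

```python
# Hypothetical hash-join sketch mirroring the Joiner's master/detail behaviour:
# build a cache from the master source, then probe it with each detail row.
def joiner(master_rows, detail_rows, key="id"):
    master_cache = {row[key]: row for row in master_rows}   # master rows cached in memory
    for detail in detail_rows:                              # detail rows are streamed
        master = master_cache.get(detail[key])
        if master is not None:                              # simple inner-join semantics
            yield {**master, **detail}

masters = [{"id": 1, "name": "ACME"}, {"id": 2, "name": "Globex"}]
details = [{"id": 1, "amount": 250.0}, {"id": 3, "amount": 10.0}]
print(list(joiner(masters, details)))
```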

SESSION LOGS

Information that reside in a session log:

- Allocation of system shared memory

- Execution of Pre-session commands/ Post-session commands

- Session Initialization

- Creation of SQL commands for reader/writer threads

- Start/End timings for target loading

- Error encountered during session

- Load summary of Reader/Writer/ DTM statistics

Other Information

- By default, the server generates log files based on the server code page.

Thread Identifier

Ex: CMN_1039

Reader and Writer thread codes have 3 digits and Transformation codes have 4 digits.

The numbers following a thread name indicate the following:

(a) Target load order group number

(b) Source pipeline number

(c) Partition number

(d) Aggregate/ Rank boundary number.

Log File Codes

Error Codes Description

BR - Related to reader process, including ERP, relational and flat file.

CMN - Related to database, memory allocation

DBGR - Related to debugger

EP- External Procedure

LM - Load Manager

TM - DTM

REP - Repository

WRT - Writer

Load Summary

(a) Inserted

(b) Updated

(c) Deleted

(d) Rejected

Statistics details

(a) Requested rows shows the number of rows the writer actually received for the specified operation.

(b) Applied rows shows the number of rows the writer successfully applied to the target (without error).

(c) Rejected rows shows the number of rows the writer could not apply to the target.

(d) Affected rows shows the number of rows affected by the specified operation.

Detailed transformation statistics

The server reports the following details for each transformation in the mapping

(a) Name of Transformation

(b) No of I/P rows and name of the Input source

(c) No of O/P rows and name of the output target

(d) No of rows dropped

Tracing Levels

Normal - Initialization and status information, errors encountered, transformation errors, rows skipped, and summarized session details (not at the level of individual rows).

Terse - Initialization information as well as error messages, and notification of rejected data.

Verbose Init - In addition to normal tracing: names of the index and data files used, and detailed transformation statistics.

Verbose Data - In addition to verbose initialization tracing: each row that passes into the mapping, and detailed transformation statistics.

NOTE: When you enter a tracing level in the session property sheet, you override the tracing levels configured for transformations in the mapping.


Posted by sami at 7:29 PM

Labels: Informatica


Wednesday
Informatica Performance Tuning Tips and Tricks
BASIC, INTERMEDIATE, AND ADVANCED TUNING PRACTICES. Table of Contents * Basic Guidelines * Intermediate Guidelines * Advanced INFORMATICA BASIC TUNING GUIDELINES The following points are high-level issues on where to go to perform "tuning" in Informatica's products. These are not NOT permanent instructions, nor are they the end-all solution. Just some items (which if tuned first) might make a difference. The level of skill available for certain items will cause the results to vary. To 'test' performance throughput it is generally recommended that the source set of data produce about 200,000 rows to process. Beyond this - the performance problems / issues may lie in the database - partitioning tables, dropping / re-creating indexes, striping raid arrays, etc... Without such a large set of results to deal with, you're average timings will be skewed by other users on the database, processes on the server, or network traffic. This seems to be an ideal test size set for producing mostly accurate averages. Try tuning your maps with these steps first. Then move to tuning the session, iterate this sequence until you are happy, or cannot achieve better performance by continued efforts. If the performance is still not acceptable,. then the architecture must be tuned (which can mean changes to what maps are created). In this case, you can contact us - we tune the architecture and the whole system from top to bottom. KEEP THIS IN MIND: In order to achieve optimal performance, it's always a good idea to strike a balance between the tools, the database, and the hardware resources. Allow each to do what they do best. Varying the architecture can make a huge difference in speed and optimization possibilities. 1. Utilize a database (like Oracle / Sybase / Informix / DB2 etc...) for significant data handling operations (such as sorts, groups, aggregates). In other words, staging tables can be a huge benefit to parallelism of operations. In parallel design - simply defined by mathematics, nearly always cuts your execution time. Staging tables have many benefits. Please see the staging table discussion in the methodologies section for full details. 2. Localize. Localize all target tables on to the SAME instance of Oracle (same SID), or same instance of Sybase. Try not to use Synonyms (remote database links) for anything (including: lookups, stored procedures, target tables, sources, functions, privileges, etc...). Utilizing remote links will most certainly slow things down. For Sybase users, remote mounting of databases can definitely be a hindrance to performance. 3. If you can - localize all target tables, stored procedures, functions, views, sequences in the SOURCE database. Again, try not to connect across synonyms. Synonyms (remote database tables) could potentially affect performance by as much as a factor of 3 times or more. 4. Remove external registered modules. Perform pre-processing / post-processing utilizing PERL, SED, AWK, GREP instead. The Application Programmers Interface (API) which calls externals is inherently slow (as of: 1/1/2000). Hopefully Informatica will speed this up in the future. The external module which exhibits speed problems is the regular expression module (Unix: Sun Solaris E450, 4 CPU's 2 GIGS RAM, Oracle 8i and Informatica). It broke speed from 1500+ rows per second without the module -to 486 rows per second with the module. No other sessions were running. (This was a SPECIFIC case - with a SPECIFIC map - it's not like this for all maps). 5. 
Remember that Informatica suggests that each session takes roughly 1 to 1 1/2 CPU's. In keeping with this - Informatica play's well with RDBMS engines on the same machine, but does NOT get along (performance wise) with ANY other engine (reporting engine, java engine, OLAP engine, java virtual machine, etc...) 6. Remove any database based sequence generators. This requires a wrapper function / stored procedure call. Utilizing these stored procedures has caused performance to drop by a factor of 3 times. This slowness is not easily debugged - it can only be spotted in the Write Throughput column. Copy the map, replace the stored proc call with an internal sequence generator for a test run - this is how fast you COULD run your map. If you must use a database generated sequence number, then follow the instructions for the staging table usage. If you're dealing with GIG's or Terabytes of information - this should save you lot's of hours tuning. IF YOU MUST - have a shared sequence generator, then build a staging table from the flat file, add a SEQUENCE ID column, and call a POST TARGET LOAD stored procedure to populate that column. Place the post target load procedure in to the flat file to staging table load map. A single call to inside the database, followed by a batch operation to assign sequences is the fastest method for utilizing shared sequence generators. 7. TURN OFF VERBOSE LOGGING. The session log has a tremendous impact on the overall performance of the map. Force over-ride in the session, setting it to NORMAL logging mode. Unfortunately the logging mechanism is not "parallel" in the internal core, it is embedded directly in to the operations. 8. Turn off 'collect performance statistics'. This also has an impact - although minimal at times - it writes a series of performance data to the performance log. Removing this operation reduces reliance on the flat file operations. However, it may be necessary to have this turned on DURING your tuning exercise. It can reveal a lot about the speed of the reader, and writer threads. 9. If your source is a flat file - utilize a staging. This way - you can also use SQL*Loader, BCP, or some other database Bulk-Load utility. Place basic logic in the source load map, remove all potential lookups from the code. At this point - if your reader is slow, then check two things: 1) if you have an item in your registry or configuration file which sets the "ThrottleReader" to a specific maximum number of blocks, it will limit your read throughput (this only needs to be set if the sessions have a demonstrated problems with constraint based loads) 2) Move the flat file to local internal disk (if at all possible). Try not to read a file across the network, or from a RAID device. Most RAID array's are fast, but Informatica seems to top out, where internal disk continues to be much faster. Here - a link will NOT work to increase speed - it must be the full file itself - stored locally. 10. Try to eliminate the use of non-cached lookups. By issuing a non-cached lookup, you're performance will be impacted significantly. Particularly if the lookup table is also a "growing" or "updated" target table - this generally means the indexes are changing during operation, and the optimizer looses track of the index statistics. Again - utilize staging tables if possible. 
In utilizing staging tables, views in the database can be built which join the data together; or Informatica's joiner object can be used to join data together - either one will help dramatically increase speed. 11. Separate complex maps - try to break the maps out in to logical threaded sections of processing. Re-arrange the architecture if necessary to allow for parallel processing. There may be smaller components doing individual tasks, however the throughput will be proportionate to the degree of parallelism that is applied. A discussion on HOW to perform this task is posted on the methodologies page, please see this discussion for further details. 12. BALANCE. Balance between Informatica and the power of SQL and the database. Try to utilize the DBMS for what it was built for: reading/writing/sorting/grouping/filtering data en-masse. Use Informatica for the more complex logic, outside joins, data integration, multiple source feeds, etc... The balancing act is difficult without DBA knowledge. In order to achieve a balance, you must be able to recognize what operations are best in the database, and which ones are best in Informatica. This does not degrade from the use of the ETL tool, rather it enhances it - it's a MUST if you are performance tuning for high-volume throughput. 13. TUNE the DATABASE. Don't be afraid to estimate: small, medium, large, and extra large source data set sizes (in terms of: numbers of rows, average number of bytes per row), expected throughput for each, turnaround time for load, is it a trickle feed? Give this information to your DBA's and ask them to tune the database for "wost case". Help them assess which tables are expected to be high read/high write, which operations will sort, (order by), etc... Moving disks, assigning the right table to the right disk space could make all the difference. Utilize a PERL script to generate "fake" data for small, medium, large, and extra large data sets. Run each of these through your mappings - in this manner, the DBA can watch or monitor throughput as a real load size occurs. 14. Be sure there is enough SWAP, and TEMP space on your PMSERVER machine. Not having enough disk space could potentially slow down your entire server during processing (in an exponential fashion). Sometimes this means watching the disk space as while your session runs. Otherwise you may not get a good picture of the space available during operation. Particularly if your maps contain aggregates, or lookups that flow to disk Cache directory - or if you have a JOINER object with heterogeneous sources. 15. Place some good server load monitoring tools on your PMServer in development - watch it closely to understand how the resources are being utilized, and where the hot spots are. Try to follow the recommendations - it may mean upgrading the hardware to achieve throughput. Look in to EMC's disk storage array - while expensive, it appears to be extremely fast, I've heard (but not verified) that it has improved performance in some cases by up to 50% . 16. SESSION SETTINGS. In the session, there is only so much tuning you can do. Balancing the throughput is important - by turning on "Collect Performance Statistics" you can get a good feel for what needs to be set in the session - or what needs to be changed in the database. Read the performance section carefully in the Informatica manuals. Basically what you should try to achieve is: OPTIMAL READ, OPTIMIAL THROUGHPUT, OPTIMAL WRITE. 
Over-tuning one of these three pieces can result in ultimately slowing down your session. For example: your write throughput is governed by your read and transformation speed, likewise, your read throughput is governed by your transformation and write speed. The best method to tune a problematic map, is to break it in to components for testing: Read Throughput, tune for the reader, see what the settings are, send the write output to a flat file for less contention - Check the "ThrottleReader" setting (which is not configured by default), increase the Default Buffer Size by a factor of 64k each shot - ignore the warning above 128k. If the Reader still appears to increase during the session, then stabilize (after a few thousand rows), then try increasing the Shared Session Memory from 12MB to 24MB. If the reader still stabilizes, then you have a slow source, slow lookups, or your CACHE directory is not on internal disk. If the reader's throughput continues to climb above where it stabilized, make note of the session settings. Check the Performance Statistics to make sure the writer throughput is NOT the bottleneck - you are attempting to tune the reader here, and don't want the writer threads to slow you down. Change the map target back to the database targets - run the session again. This time, make note of how much the reader slows down, it's optimal performance was reached with a flat file(s). This time - slow targets are the cause. NOTE: if your reader session to flat file just doesn't ever "get fast", then you've got some basic map tuning to do. Try to merge expression objects, set your lookups to unconnected (for re-use if possible), check your Index and Data cache settings if you have aggregation, or lookups being performed. Etc... If you have a slow writer, change the map to a single target table at a time - see which target is causing the "slowness" and tune it. Make copies of the original map, and break down the copies. Once the "slower" of the N targets is discovered, talk to your DBA about partitioning the table, updating statistics, removing indexes during load, etc... There are many database things you can do here. 17. Remove all other "applications" on the PMServer. Except for the database / staging database or Data Warehouse itself. PMServer plays well with RDBMS (relational database management system) - but doesn't play well with application servers, particularly JAVA Virtual Machines, Web Servers, Security Servers, application, and Report servers. All of these items should be broken out to other machines. This is critical to improving performance on the PMServer machine. INFORMATICA INTERMEDIATE TUNING GUIDELINES The following numbered items are for intermediate level tuning. After going through all the pieces above, and still having trouble, these are some things to look for. These are items within a map which make a difference in performance (We've done extensive performance testing of Informatica to be able to show these affects). Keep in mind - at this level, the performance isn't affected unless there are more than 1 Million rows (average size: 2.5 GIG of data). ALL items are Informatica MAP items, and Informatica Objects - none are outside the map. Also remember, this applies to PowerMart /PowerCenter (4.5x, 4.6x, / 1.5x, 1.6x) - other versions have NOT been tested. The order of these items is not relevant to speed. Each one has it's own impact on the overall performance. Again, throughput is also gauged by the number of objects constructed within a map/maplet. 
Sometimes it's better to sacrifice a little readability for a little speed. It's the old paradigm: weighing readability and maintainability (true modularity) against raw speed. Make sure the client agrees with the approach, or that the data sets are large enough to warrant this type of tuning. BE AWARE: the following tuning tips range from "minor" cleanup to "last resort" types of things - only when data sets get very large should these items be addressed; otherwise start with the BASIC tuning list above, then work your way into these suggestions. To understand the intermediate section, you'll need to review these tips.

1. Filter expressions - try to evaluate them in a port expression. Create the filter (true/false) answer inside a port expression upstream. Complex filter expressions slow down the mapping; expressions/conditions operate fastest in an Expression object with an output port for the result. It turns out that the longer or more complex the expression, the more severe the speed degradation. So place the actual expression (complex or not) in an EXPRESSION OBJECT upstream from the filter, compute a single numeric flag (1 for true, 0 for false) as an output port, and pump that into the filter - you should see the maximum performance with this configuration (see the sketch below).

2. Remove all "DEFAULT" value expressions where possible. Having a default value - even the "ERROR(xxx)" command - slows down the session, because it causes an unnecessary evaluation of values for every data element in the map. The only time you want to use a DEFAULT value is when you have to provide a default for a specific port. There is another method: placing a variable with an IIF(xxxx, DEFAULT VALUE, xxxx) condition within an expression. This will always be faster (if assigned to an output port) than a default value.

3. Variable ports are "slower" than output expressions. Whenever possible, use output expressions instead of variable ports. Variables are good for static and state-driven logic, but they do slow down the processing time, as they are allocated/reallocated on each pass of a row through the expression object.

4. Datatype conversion - perform it in a port expression. Simply mapping a string to an integer, or an integer to a string, will perform the conversion, but it will be slower than creating an output port with an expression like TO_INTEGER(xxxx) and mapping an integer to an integer. This is because PMServer is left to decide whether the conversion can be done mid-stream, which seems to slow things down.

5. Unused ports. Surprisingly, unused output ports have no effect on performance. This is a good thing. However, in general it is good practice to remove any unused ports in the mapping, including variables. Unfortunately, there is no "quick" method for identifying unused ports.
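As a minimal sketch of the flag pattern from item 1 above (the transformation and port names are made up for illustration, following the naming standards earlier in this document): in an Expression transformation such as exp_CALC_FLAGS, define an output port

    out_load_flag = IIF(in_status = 'A' AND in_amount > 0, 1, 0)

and then make the condition of the downstream Filter transformation (for example fil_LOAD_ROWS) simply

    out_load_flag

so the filter itself evaluates nothing more than a numeric flag.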
6. String functions. String functions definitely have an impact on performance, particularly those that change the length of a string (SUBSTR, LTRIM, RTRIM, etc.). These functions slow the map down considerably; the operations behind each string function are expensive (de-allocating and re-allocating memory within a READER block in the session). String functions are a necessary and important part of ETL, and we do not recommend removing their use completely - only try to limit them to necessary operations. One of the ways we advocate tuning these is to use varchar/varchar2 data types in your database sources, or to use delimited strings in source flat files (as much as possible). This will help reduce the need for "trimming" input. If your sources are in a database, perform the LTRIM/RTRIM functions on the data in the source SQL statement - this will be much faster than performing it operationally mid-stream.

7. IIF conditionals are costly. When possible, arrange the logic to minimize the use of IIF conditionals. This is not particular to Informatica - it is costly in ANY programming language. It introduces "decisions" within the tool, and it also introduces multiple code paths across the logic (thus increasing complexity). Therefore, when possible, avoid utilizing an IIF conditional; one alternative here might be (for example) an Oracle DECODE function applied in the SQL source.

8. Sequence generators slow down mappings. Unfortunately there is no "fast" and easy way to create sequence numbers. The cost is not that high for using a sequence generator inside Informatica, particularly if you are caching values (a cache of around 2000 seems to be the sweet spot). However, if at all avoidable, this is one "card" up a sleeve that can be played. If you don't absolutely need the sequence number in the map for calculation reasons, and you are utilizing Oracle, then let SQL*Loader generate the sequence values for all insert rows. If you're using Sybase, don't specify the identity column as a target - let the Sybase server generate the column. Also, try to avoid "reusable" sequence generators; they tend to slow the session down further, even with cached values.

9. Test expressions slow down sessions. Expressions such as IS_SPACES tend to slow down the mappings; this is a data validation expression which has to run through the entire string to determine if it is spaces, much the same as IS_NUMBER has to validate an entire string. These expressions (if at all avoidable) should be removed in cases where it is not necessary to "test" prior to conversion. Be aware, however, that direct conversion without testing (conversion of an invalid value) will kill the transformation. If you absolutely need a test expression for numerics, try this: IIF(<port> * 1 >= 0, <port>, NULL) - preferably where you don't care about distinguishing zero. An alpha value in this expression should return a NULL to the computation. Yes, the IIF condition is slightly faster than IS_NUMBER - because IS_NUMBER parses the entire string, whereas the multiplication operator is where the actual speed gain comes from.

10. Reduce the number of OBJECTS in a map. Frequently the idea of these tools is to make the "data translation map" as easy as possible, and all too often that means creating one expression object for each translation (taking it to an extreme, of course). Each object adds computational overhead to the session, and timings may suffer. If performance is an issue or a goal, you can integrate several expressions into one expression object, thus reducing the "object" overhead. In doing so, you could speed up the map.

11. Update expressions - session set to Update Else Insert. If you have this switch turned on, it will definitely slow the session down: Informatica performs two operations for each row - an update (with the PK), then, if it returns zero rows updated, an insert. The way to speed this up is to "know" ahead of time whether you need to issue a DD_UPDATE or DD_INSERT inside the mapping, and tell the Update Strategy what to do (see the sketch below). After that you can change the session setting to INSERT and UPDATE AS UPDATE, or UPDATE AS INSERT.
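A sketch of the "know ahead of time" approach from item 11 - the lookup and port names here are hypothetical. If a Lookup transformation against the target's primary key (say lkp_PERSON) returns the existing key, here called lkp_person_id, or NULL when no match exists, the Update Strategy expression can flag each row explicitly:

    IIF(ISNULL(lkp_person_id), DD_INSERT, DD_UPDATE)

Because every row arrives pre-flagged, the server no longer has to attempt an update first and fall back to an insert when zero rows are updated.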
12. Multiple targets are too slow. Frequently maps are generated with multiple targets, and sometimes multiple sources. This (despite first appearances) can really burn up time. If the architecture permits change, and the users support re-work, then try to change the architecture: 1 map per target is the general rule of thumb. Once you reach one map per target, the tuning gets easier. Sometimes it helps to reduce it to 1 source and 1 target per map, but if the architecture allows more modularization, 1 map per target usually does the trick. Going further, you could break it up: 1 map per target per operation (such as insert vs. update). Doing this provides a few more cards in the deck with which you can "tune" the session, as well as the target table itself, and it also introduces parallel operations. For further info on this topic, see my architecture presentations on Staging Tables and 3rd normal form architecture (Corporate Data Warehouse Slides).

13. Slow sources - flat files. If you've got slow sources, and these sources are flat files, you can look at some of the following possibilities. If the sources reside on a different machine and you've opened a named pipe to get them across the network, then you've (potentially) opened a can of worms: you've introduced the network speed as a variable in the speed of the flat file source. Try to compress the source file, FTP PUT it on the local machine (local to PMServer), decompress it, then utilize it as a source. If you're reaching across the network to a relational table, and the session is pulling many, many rows (over 10,000), then the source system itself may be slow; you may be better off using a source system extract program to dump it to a file first, then following the above instructions. However, there is something your SAs and Network Ops folks could do if necessary (this is covered in detail in the advanced section): they could backbone the two servers together with a dedicated network line (no hubs, routers, or other items in between the two machines). At the very least, they could put the two machines on the same sub-net. Now, if your file is local to PMServer but still slow, examine the location of the file (which device is it on). If it's not on an internal disk, it will be slower than if it were (the C drive for you folks on NT). A Unix file link that exists locally while the file is remote doesn't count - the actual file must be local.

14. Too many aggregators. If your map has more than one aggregator, chances are the session will run very, very slowly - unless the CACHE directory is extremely fast and your drive seek/access times are very good. Even then, placing aggregators end-to-end in mappings will slow the session down by a factor of at least 2, because all of the I/O activity becomes a bottleneck in Informatica. What needs to be known here is that Informatica's products, PM/PC up through 4.7x, are NOT built for parallel processing. In other words, the internal core doesn't put the aggregators on threads, nor does it put the I/O on threads; being single-strung in this respect, it is easy for part of the session/map to become a process "blocked" by I/O factors. For I/O contention and resource monitoring, please see the database/data warehouse tuning guide.

15. Mapplets containing aggregators. Mapplets are a good way to replicate data logic, but just because an aggregator is in a mapplet doesn't mean it won't affect the mapping.
The reason mapplets don't inherently change the speed of a mapping is that they are treated as part of the mapping once the session starts. In other words, if you have an aggregator in a mapplet followed by another aggregator in the mapping, you still have the problem mentioned above in #14. Reduce the number of aggregators in the entire mapping (including mapplets) to 1 if possible. If necessary, split the map up into several different maps, and use intermediate tables in the database if required to achieve the processing goals.

16. Eliminate "too many lookups". What happens, and why? Well, with too many lookups your memory is eaten up by the caches, particularly on the 1.6 / 4.6 products. The end result is that there is no memory left for the sessions to run in; the DTM reader/writer/transformer threads are not left with enough memory to be able to run efficiently. PC 1.7 / PM 4.7 solve some of these problems by caching some of these lookups out to disk when the cache is full, but you still end up with contention: with too many lookups, you're trading memory contention for disk contention. The memory contention might be worse than the disk contention, because the OS ends up thrashing (swapping in and out of TEMP/SWAP disk space) with small block sizes to try to find your lookup row, and as the row goes from lookup to lookup, the swapping/thrashing gets worse.

17. Lookups and aggregators fight. The lookups and the aggregators fight for memory space as discussed above. Each requires an Index Cache and a Data Cache, and they "share" the same HEAP segments inside the core (see the Memory Layout document for more information). Particularly in the 4.6 / 1.6 products and prior, these memory areas become critical, and when dealing with many, many rows the session is almost certain to cause the server to "thrash" memory in and out of the OS swap space. If possible, separate the maps: perform the lookups in the first section of the maps and position the data in an intermediate target table, then have a second map read the target table and perform the aggregation (which also provides the option for the group-by to be done within the database, as sketched below)... Another speed improvement...
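To illustrate the group-by-in-the-database option at the end of item 17: once the first map has landed its looked-up rows in an intermediate table, the second map's Source Qualifier can carry a SQL override along these lines (the table and column names are placeholders), letting the RDBMS do the heavy grouping before the rows ever reach Informatica:

    SELECT store_id,
           item,
           SUM(qty)         AS total_qty,
           SUM(qty * price) AS total_amt
    FROM   stg_sales_detail
    GROUP BY store_id, item
    ORDER BY store_id, item

The ORDER BY is optional, but if a downstream Aggregator is still needed it allows the Sorted Input option discussed later in this document.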
INFORMATICA ADVANCED TUNING GUIDELINES

The following numbered items are for advanced-level tuning. Please proceed cautiously, one step at a time, and do not attempt to follow these guidelines if you haven't already made it through all the basic and intermediate guidelines first. These guidelines may require a level of expertise which involves System Administrators, Database Administrators, and Network Operations folks, so please be patient. The most important aspect of advanced tuning is being able to pinpoint specific bottlenecks, then having the funding to address them. As usual, these advanced tuning guidelines come last and are pointed at suggestions for the system. There are other advanced tuning guidelines available for Data Warehousing tuning; you can refer to those for questions surrounding your hardware/software resources.

1. Break the mappings out: 1 per target, or if necessary, 1 per source per target. Why does this work? Eliminating multiple targets in a single mapping can greatly increase speed. Basically it's like this: one session per map/target. Each session establishes its own database connection, and because of the unique database connection the DBMS server can now handle the insert/update/delete requests in parallel against multiple targets. It also allows each session to be specified for its intended purpose (no longer mixing a data-driven session with INSERTs-only to a single target). Each session can then be placed into a batch marked "CONCURRENT" if preferences allow. Once this is done, the parallelism of mappings and sessions becomes obvious; studies of parallel processing have shown again and again that operations can sometimes be completed in half the time of their original counterparts merely by streaming them at the same time. With multiple targets in the same mapping, you're telling a single database connection to handle multiple, diverse database statements - sometimes hitting this target, other times hitting that target. Think about it: in this situation it's extremely difficult for Informatica (or any other tool, for that matter) to build BULK operations, even though "bulk" is specified in the session. Remember that "BULK" means this is your preference, and that the tool will revert to a NORMAL load if it can't perform a BULK operation on a series of consecutive rows. Obviously, data driven then forces the tool down several other layers of internal code before the data can actually reach the database.

2. Develop mapplets for complex business logic. It appears that mapplets do NOT cause any performance hindrance by themselves. Extensive use of mapplets means better, more manageable business logic, and the mapplets allow you to better break the mappings out.

3. Keep the mappings as simple as possible. Bury complex logic (if you must) in a mapplet. If you can avoid complex logic altogether, then that would be the key. The old rule of thumb (common sense) applies here: the straighter the path between two points, the shorter the distance. Translated: the shorter the distance between the source qualifier and the target, the faster the data loads.

4. Remember the TIMING is affected by READER/TRANSFORMER/WRITER threads. With complex mappings, don't forget that each ELEMENT (field) must be weighed - in this light, a firm understanding of how to read the performance statistics generated by Informatica becomes important. In other words, if the reader is slow, the rest of the threads suffer; if the writer is slow, same effect. A pipe is only as big as its smallest diameter... a chain is only as strong as its weakest link. Sorry for the metaphors, but they should make sense.

5. Change the network packet size (for Sybase, MS-SQL Server, and Oracle users). The maximum network packet size is a database-wide setting, which usually defaults to 512 bytes or 1024 bytes. Setting the maximum database packet size doesn't necessarily hurt any of the other users; it does, however, allow the Informatica database connection to make use of the larger packet sizes and thus transfer more data in a single packet, faster. The typical "best" settings are between 10k and 20k. In Oracle, you'll need to adjust the Listener.ORA and TNSNames.ORA files to include the parameters SDU (Session Data Unit - the session-layer buffer size, in bytes) and TDU (Transport Data Unit - the transport-layer buffer size, in bytes). The SDU and TDU should be set equally (see the sketch below). See the Informatica FAQ page for more information on setting these up.

6. Change to an IPC database connection for a local Oracle database. If PMServer and Oracle are running on the same server, use an IPC connection instead of a TCP/IP connection. Change the protocol in the TNSNames.ORA and Listener.ORA files, and restart the listener on the server. Be careful: this protocol can only be used locally; however, the speed increase from using Inter-Process Communication can be between 2x and 6x. IPC is utilized by Oracle, but is defined as a Unix System V standard specification; you can find more information on IPC by reading about it in the Unix System V manuals.
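For illustration only, here is a rough sketch of what items 5 and 6 might look like in an Oracle 8i/9i-style TNSNames.ORA; the alias names, host, SID, and buffer sizes are placeholders, and the matching Listener.ORA would need the same SDU/TDU values plus an IPC address entry:

    DWH_TCP =
      (DESCRIPTION =
        (SDU = 16384)
        (TDU = 16384)
        (ADDRESS = (PROTOCOL = TCP)(HOST = dw_host)(PORT = 1521))
        (CONNECT_DATA = (SID = DWH))
      )

    DWH_IPC =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = IPC)(KEY = DWH))
        (CONNECT_DATA = (SID = DWH))
      )

Point the Informatica database connection at the DWH_IPC alias only when PMServer and the database truly share the same machine.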
7. Change database priorities for the PMServer database user. Prioritizing the database login that the connections use (set up in Server Manager) can assist in changing the priority given to the Informatica tasks; when logged in to the database, these tasks can then override others. Sizing memory for these tasks (in shared global areas and server settings) must be done if priorities are to be changed, and if BCP, SQL*Loader, or some other bulk-load facility is utilized, those priorities must also be set. This can greatly improve performance but, again, it's only suggested as a last-resort method; it doesn't substitute for tuning the database or the mapping processes and should only be utilized when all other methods have been exhausted (tuned). Keep in mind that this should be relegated to the production machines, and only in instances where the load cycle Informatica is running is NOT impeding other users.

8. Change the Unix user priority. In order to gain speed, the Informatica Unix user must be given a higher priority. The Unix SA should understand what it takes to rank the Unix logins and grant priorities to particular tasks. Alternatively, simply have the pmserver executed under a super-user (SU) command; this will take care of re-prioritizing Informatica's core process. This should only be used as a last resort, once all other tuning avenues have been exhausted, or if you have a dedicated Unix machine on which Informatica is running.

9. Try not to load across the network. If at all possible, try to co-locate the PMServer executable with a local database. Not having the database local means: 1) the repository is across the network (slow); 2) the sources/targets are across the network, which is also potentially slow. If you have to load across the network, at least try to localize the repository on a database instance on the same machine as the server. Also try to co-locate the two machines (PMServer and the target database server) on the same sub-net, even the same hub if possible; this eliminates unnecessary routing of packets all over the network. Having a localized database also allows you to set up a target table locally, which you can then "dump" following a load, FTP to the target server, and bulk-load into the target table. This works extremely well for situations where an append or complete refresh is taking place.

10. Set the Session Shared Memory setting between 12MB and 24MB. Typically I've seen folks attempt to assign a session large heaps of memory in the hope that it will increase speed; all it tends to do is slow down the processing. See the memory layout document for further information on how this affects Informatica and its memory handling, and why simply giving it more memory doesn't necessarily provide speed.

11. Set the Shared Buffer Block Size around 128k. Again, something that's covered in the memory layout document; this seems to be a "sweet spot" for handling blocks of rows inside the Informatica process.

12. MEMORY SETTINGS. The settings above are for an average configured machine; any machine with less than 10 GIGs of RAM should abide by the above settings.
If you've got 12+ GIGs and you're running only 1 to 3 sessions concurrently, go ahead and specify the Session Shared Memory size at 1 or 2 GIGs; just keep in mind that the Shared Buffer Block Size should be set relative to the Shared Memory setting. If you set Shared Memory to 124 MB, set the Buffer Block Size to 12MB - keep them in relative sizes. If you don't, the result will be more memory "handling" going on in the background, so less actual work will be done by Informatica. Also, this holds true for the simpler mappings: the more complex the mapping, the less likely you are to see a gain by increasing either the buffer block size or the shared memory settings, because Informatica potentially has to process cells (ports/fields/values) inside a huge memory block, resulting in a potential re-allocation of the whole block.

13. Use SNAPSHOTS with your database. If you have dedicated lines (DS3/T1, etc.) between servers, use a snapshot or Advanced Replication to get data out of the source systems and into a staging table (a duplicate of the source), and schedule the snapshot before running your processes. The RDBMS servers are built for this kind of data transfer, with optimizations built into the core to transfer data incrementally or as a whole refresh. It may be to your advantage, particularly if your sources contain 13 million+ rows. Point the Informatica processes at the snapshot; at that point you can index it any way you like and increase the throughput speed without affecting the source systems. Note that snapshots only work if your sources are homogeneous to your targets (on the same type of system).

14. INCREASE THE DISK SPEED. One of the most common fallacies is that a data warehouse RDBMS needs only 2 controllers and 13 disks to survive. This is fine if you're running less than 5 million rows total through your system, or your load window exceeds 5 hours. I recommend at least 4 to 6 controllers and at least 50 disks, set up in a RAID 0+1 array, spinning at 7200 RPM or better. If it's necessary, plunk the money down and get an EMC device; you should see a significant increase in performance after installing or upgrading to such a configuration.

15. Switch to RAID 0+1. RAID level 5 is great for redundancy and horrible for data warehouse performance, particularly on bulk loads. RAID 0+1 is the preferred method for data warehouses, and most folks find that the mirroring is just as safe as RAID 5, particularly since the hardware is now nearly all hot-swappable and the software to manage it has improved greatly.

16. Upgrade your hardware. On your production box, if you want gigabytes-per-hour throughput, or you want to create 10 indexes in 4 hours on 34 million rows, then add CPU power, RAM, and the disk modifications discussed above. A 4-CPU machine just won't cut the mustard today for this size of operation. I recommend a minimum of 8 CPUs as a starter box, increasing to 12 as necessary. Again, this is for huge data warehousing systems - GIGs per hour / MBs per hour. A box with 4 CPUs is great for development, or for smaller systems (totalling less than 5 million rows in the warehouse). However, keep in mind that bus speed is also a huge factor here; I've heard of a 4-CPU DEC Alpha system outperforming a 6-CPU system. So what's the bottom line? Disk RPMs, bus speed, RAM, and number of CPUs - I'd say potentially in that order.
Both Oracle and Sybase perform extremely well when given 6+ CPUs and 8 to 12 GIGs of RAM, set up on an EMC device at 7200 RPM with a minimum of 4 controllers.

Sorting - performance issues

You can improve Aggregator transformation performance by using the Sorted Input option. When the Sorted Input option is selected, the Informatica Server assumes all data is sorted by group. As the Informatica Server reads rows for a group, it performs aggregate calculations as it reads, and when necessary it stores group information in memory. To use the Sorted Input option, you must pass sorted data to the Aggregator transformation. You can gain added performance with sorted ports when you partition the session. When Sorted Input is not selected, the Informatica Server still performs aggregate calculations as it reads; however, since the data is not sorted, it must store the data for each group until it has read the entire source, to ensure all aggregate calculations are accurate.

For example, one Aggregator has STORE_ID and ITEM as Group By ports, with the Sorted Input option selected. When you pass the following data through the Aggregator, the Informatica Server performs an aggregation for the three records in the 101/battery group as soon as it finds the new group, 201/battery:

STORE_ID  ITEM       QTY  PRICE
101       'battery'  3    2.99
101       'battery'  1    3.19
101       'battery'  2    2.59
201       'battery'  4    1.59
201       'battery'  1    1.99

If you use the Sorted Input option and do not presort the data correctly, the session fails.

Sorted Input Conditions
Do not use the Sorted Input option if any of the following conditions are true:
* The aggregate expression uses nested aggregate functions.
* The session uses incremental aggregation.
* Input data is data-driven: you chose to treat source data as data driven in the session properties, or an Update Strategy transformation appears before the Aggregator transformation in the mapping.
* The mapping has been upgraded from PowerMart 3.5.
If you use the Sorted Input option under these circumstances, the Informatica Server reverts to the default aggregate behavior, reading all values before performing aggregate calculations.

Pre-Sorting Data
To use the Sorted Input option, you pass sorted data through the Aggregator. Data must be sorted as follows:
* By the Aggregator group-by ports, in the order they appear in the Aggregator transformation.
* Using the same sort order configured for the session. If the data is not in strict ascending or descending order based on the session sort order, the Informatica Server fails the session. For example, if you configure a session to use a French sort order, data passing into the Aggregator transformation must be sorted using the French sort order.
If the session uses file sources, you can use an external utility to sort the file data before starting the session. If the session uses relational sources, you can use the Number of Sorted Ports option in the Source Qualifier transformation to sort the group-by columns in the source database. The group-by columns must be in exactly the same order in both the Aggregator and the Source Qualifier transformations.

Indexes
Make sure indexes are in place and the tables have been analyzed. You might also be able to use index hints in the Source Qualifier.
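As an illustration of the last two notes (sorting the group-by columns in the source database and using an index hint), a Source Qualifier SQL override might look roughly like the following. The table, index name, and columns are placeholders, and the ORDER BY must list the group-by ports in the same order and sort order the Aggregator expects:

    SELECT /*+ INDEX (s store_item_idx) */
           s.store_id,
           s.item,
           s.qty,
           s.price
    FROM   sales s
    ORDER BY s.store_id, s.item

With the rows arriving pre-sorted this way, the Aggregator can safely use the Sorted Input option described above.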
3. I want to add some new columns to a source table, and I have used this source in a lot of mappings. If I add new columns to my source table, will they be updated automatically in all the mappings where I used this particular source?

-> If you add the columns to the source table definition, the change will be reflected in all the mappings where that source has been used. But if you want to use the added columns in a mapping, you still have to connect them through the Source Qualifier to the downstream transformations and targets yourself.

source: http://etl-information.blogspot.com/search/label/Informatica