Saturday, June 11, 2011

HP Interview Questions and Answers (11-06-2011)


1. What is meant by an inline view?
Ans: An inline view is a SELECT statement in the FROM clause of another SELECT statement.
Inline views are commonly used to simplify complex queries by removing join operations
and condensing several separate queries into a single query.
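For example, a minimal sketch against the standard scott EMP table (the same schema used elsewhere on this blog), where the query in the FROM clause is the inline view:

    select e.ename, e.sal, d.avg_sal
    from emp e,
         (select deptno, avg(sal) as avg_sal   -- this subquery is the inline view
          from emp
          group by deptno) d
    where e.deptno = d.deptno
    and e.sal > d.avg_sal;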


2. What is a conformed dimension?
Ans: A dimension table that is shared by more than one fact table is known as a conformed dimension.
Ex: date dimension.


3. What is a factless fact table?
Ans: A factless fact table captures a many-to-many relationship b/w dimensions, and it doesn't contain any numeric measures.
Ex: student attendance tracking;
a store that did not sell a product over a period of time.
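A minimal sketch of such a table for the attendance example (all table and column names are hypothetical):

    create table fact_attendance
    (
       date_key    number not null,   -- foreign key to a date dimension
       student_key number not null,   -- foreign key to a student dimension
       course_key  number not null    -- foreign key to a course dimension
    );
    -- no numeric measure columns: each row only records that an
    -- attendance event occurred for this combination of keys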


4. I have 3 flat files with the same structure; how to join them without using a Joiner transformation?
Ans: By using the file list concept, with the source filetype property set to Indirect.
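With the Indirect file type, the Integration Service reads a list file whose lines are the paths of the actual data files, for example (paths hypothetical):

    /data/src/sales_file1.txt
    /data/src/sales_file2.txt
    /data/src/sales_file3.txt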


5. How to delete the duplicate records in Unix?
Ans: sort filename.txt | uniq
(or sort -u filename.txt. Note that uniq -u would instead print only the lines that are never duplicated, dropping every copy of a repeated line.)


6. What is a surrogate key?
Ans: A surrogate key is a system-generated sequence number used as a primary key.


7. What is the diff b/w a surrogate key and a primary key?
Ans:   Surrogate key                          Primary key
       1. Artificial key                      1. Natural key
       2. Generated by a system               2. Based on existing data
          sequence number                        in the database


8. What are a subquery & a correlated subquery?
Ans: Subquery: A query nested inside a SELECT statement is known as a subquery,
and is an alternative to complex join statements.
Correlated subquery: It is a subquery (a query nested inside another query)
that uses values from the outer query in its WHERE clause. The subquery
is evaluated once for each row processed by the outer query.
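A minimal sketch of each form against the scott EMP table:

    -- simple subquery: evaluated once for the whole statement
    select ename, sal
    from emp
    where sal > (select avg(sal) from emp);

    -- correlated subquery: re-evaluated for each outer row,
    -- because it references the outer alias e
    select e.ename, e.sal, e.deptno
    from emp e
    where e.sal > (select avg(sal)
                   from emp
                   where deptno = e.deptno);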


9. How to eliminate duplicate records in Informatica and in SQL?
Ans: In Informatica, by using the Select Distinct property in the Source Qualifier, or by using a Sorter t/r with the Distinct option.
In SQL: delete from emp where rowid not in (select max(rowid) from emp group by empno);

Spaninfotech Interview Questions & Answers (11-06-2011)

1. select max(sal), deptno from emp where max(sal) > 2000 group by deptno -- is this query right or wrong?
Ans: Wrong. An aggregate such as max(sal) cannot appear in the WHERE clause; the restriction on groups belongs in HAVING:
select max(sal), deptno from emp group by deptno having max(sal) > 2000;


2. What is the diff b/w the WHERE and HAVING clauses?
Ans: WHERE restricts individual rows before grouping;
HAVING restricts groups of records after aggregation.
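A minimal sketch showing both clauses in one query against EMP:

    select deptno, avg(sal)
    from emp
    where job <> 'PRESIDENT'      -- WHERE: filters rows before they are grouped
    group by deptno
    having avg(sal) > 2000;       -- HAVING: filters the groups after aggregation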


3. Can we use a Transaction Control transformation on a flat file target?
Ans: We can't. A Transaction Control transformation connected to any target other than a relational, XML, or dynamic MQSeries target is ineffective for that target.


4. Which one gives better performance, a static or a dynamic lkp?
Ans: Performance-wise, a static lookup is better than a dynamic lkp; dynamic lookups are slow because the cache is updated row by row as transactions are posted to the database.

5. When do we use a dynamic lookup?
Ans: If the source contains a large number of duplicate records, we use a dynamic lkp cache to eliminate the duplicates.

6. I have 10 records in my source table. I take a Filter transformation and give the condition TRUE; how many records are loaded into the target table?
Ans: 10 records. With the condition TRUE the Filter passes every row, so the active transformation effectively behaves as a passive one.








Friday, June 10, 2011

Types of Fact Tables


Types of Facts

There are three types of facts:
  • Additive: Additive facts are facts that can be summed up through all of the dimensions in the fact table.
  • Semi-Additive: Semi-additive facts are facts that can be summed up for some of the dimensions in the fact table, but not the others.
  • Non-Additive: Non-additive facts are facts that cannot be summed up for any of the dimensions present in the fact table.
Let us use examples to illustrate each of the three types of facts. The first example assumes that we are a retailer, and we have a fact table with the following columns:


Date
Store
Product
Sales_Amount
The purpose of this table is to record the sales amount for each product in each store on a daily basis. Sales_Amount is the fact. In this case, Sales_Amount is an additive fact, because you can sum up this fact along any of the three dimensions present in the fact table -- date, store, and product. For example, the sum of Sales_Amount for all 7 days in a week represents the total sales amount for that week.
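For instance, a sketch of summing this additive fact along the date dimension (table and column names are hypothetical):

    -- weekly sales per store and product: valid because
    -- Sales_Amount can be summed across the Date dimension
    select store, product, sum(sales_amount) as weekly_sales
    from sales_fact
    where sale_date between date '2011-06-05' and date '2011-06-11'
    group by store, product;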
Say we are a bank with the following fact table:

Date
Account
Current_Balance
Profit_Margin
The purpose of this table is to record the current balance for each account at the end of each day, as well as the profit margin for each account for each day. Current_Balance and Profit_Margin are the facts. Current_Balance is a semi-additive fact, as it makes sense to add it up across all accounts (what's the total current balance for all accounts in the bank?), but it does not make sense to add it up through time (adding up all current balances for a given account for each day of the month does not give us any useful information). Profit_Margin is a non-additive fact, for it does not make sense to add it up at either the account level or the day level.
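A sketch of the distinction in SQL (table and column names hypothetical):

    -- valid: summing the semi-additive balance across accounts on a single day
    select snapshot_date, sum(current_balance) as bank_total
    from account_snapshot_fact
    where snapshot_date = date '2011-06-10'
    group by snapshot_date;

    -- summing the same balance across days for one account is meaningless;
    -- across time a snapshot fact is typically averaged instead
    select account, avg(current_balance) as avg_daily_balance
    from account_snapshot_fact
    group by account;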


Types of Fact Tables

Based on the above classifications, there are two types of fact tables:
  • Cumulative: This type of fact table describes what has happened over a period of time. For example, this fact table may describe the total sales by product by store by day. The facts for this type of fact table are mostly additive facts. The first example presented here is a cumulative fact table.
  • Snapshot: This type of fact table describes the state of things in a particular instance of time, and usually includes more semi-additive and non-additive facts. The second example presented here is a snapshot fact table. 

Monday, June 6, 2011

Accenture telephonic Interview Questions (6-06-2011)

1. What is the diff b/w 7.x/8.x?

2. What is the SQL transformation? Explain.

3. What are a domain and a node?

4. What is the use of the associated port in a lkp transformation?

5. What is the diff b/w a SQL Source Qualifier and a flat file Source Qualifier?

6. I have 1000 flat files and have already loaded 700 of them; how do you load the remaining 300 flat files?

7. I have to load 1 lakh records into the production system; in between, the session fails. What can you do (you can't restart, and you are also not assigning the session to recovery mode)?

8. How can you migrate the code from the development environment to the production environment?

9. Which type of SCD2 did you use in your project?

10. If your source data contains no duplicates, you don't want to perform incremental aggregation, and there is no need to sort the data, then what is the use of a Sorter t/r before an Aggregator?

11. My parameter file doesn't contain any value; then how does the Integration Service execute the parameter file?

12. If you are using a mapping variable and the session fails when you run it, then what is the value in the repository?

13. What are the reusable tasks you used?

TCS(white field,bangalore) Interview Questions (4-06-2011)

1. Explain your project.

2. How to display duplicate records in a table by using a SQL query?
Ex: 100 aaa
    100 aaa
    100 aaa
    101 bbb
    102 ccc

I want to display them like this: 100 aaa
                                  100 aaa
                                  100 aaa
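One possible answer, a sketch assuming the two columns are named empno and ename:

    select *
    from emp e
    where e.empno in (select empno
                      from emp
                      group by empno
                      having count(*) > 1);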

3. Explain SCD Type 2. I have the data: d1 1 mum
                                        d2 1 kol
                                        d3 1 hyd
How will it be inserted?

4. What is a mapplet? Explain where you used a mapplet in real time.

5. emp
   boss
   sales transaction
   hra a/c

Among these, which are the fact tables and which are the dimension tables?

6. Explain mapping parameters and variables.

7. What is a factless fact table? Explain where you used one in real time.

8. Diff b/w IN and EXISTS? (subqueries)

9. I have 100 records in my target table. The next day I want to insert 5 records and update 10 records; then what is the total no. of records in my target table? (using SCD2)

10. Diff b/w connected and unconnected lkp?

11. By using which transformation can you capture the rejected records, and where will they be stored?

12. What are a shortcut and a reusable transformation?

13. In my table there are 1000 records; I want to get 100 records. What is the SQL query?

14. Diff b/w Source Qualifier and Filter?

15. What are abort and stop? Where do we use abort?

16. What are the types of tracing levels, and what is verbose initialization?

17. Diff b/w Union and Joiner?

18. What are the reusable and non-reusable tasks?

Thursday, June 2, 2011

CDM Project Configuration



Database design:

Temporary Database (oracle)

username : scott
password : tiger
(stores the metadata for the source files)

Source Database (oracle)

username : source_stage
password : source


Target Database (oracle) - DWH

username : dim_target
password : target


Procedure for creating the users (schemas) and granting them privileges:

sql >create user source_stage identified by source;

sql >create user dim_target identified by target;

sql >grant dba to source_stage;

sql >grant dba to dim_target;

1. Procedure for creating the source file structure in the scott schema:

   Connect to the scott schema (username = scott and password = tiger).
   Copy and paste the source table structures into the scott schema.
   The source table structure is available in the document file below:
   Rafi\CDM_Project\FILE_Source_Data\Table_Structure.doc


2. Procedure for creating the stage table structure in the source_stage schema:

   Connect to the source_stage schema (username = source_stage and password = source).
   Copy and paste the stage table structures into the source_stage schema.
   The stage table structure is available in the document file below:
   Rafi\CDM_Project\FILE_Source_Data\Table_Structure.doc


3. Procedure for converting the source file structure (oracle) to flat files:

   Import all tables from the scott schema (Source Analyzer in Informatica),
   then change the database type of all the tables from oracle to flat file.


4. Create a date table in the source_stage schema:

    Create Table T_DATE
     (
        FULL_DATE Date
      );

  Procedure for loading the dates into the T_DATE table (source_stage schema):


    CREATE OR REPLACE PROCEDURE load_date
    AS
      start_date date := to_date('01-01-2007','dd-mm-yyyy');
      end_date   date := to_date('31-12-2012','dd-mm-yyyy');
    BEGIN
      -- insert one row per calendar day, from start_date to end_date inclusive
      WHILE start_date <= end_date LOOP
        INSERT INTO t_date VALUES (start_date);
        start_date := start_date + 1;
      END LOOP;
      COMMIT;
    END;
    /


5. Start a mapping to pass the source file data (scott schema) into the stage tables (source_stage schema),
   as follows:

              scott                           stage
     account_src (flat file)             t_account (oracle)
     product_src (flat file)             t_product (oracle)
     market_src (flat file)              t_market (oracle)
     employee_src (flat file)            t_employee (oracle)
     client_order (flat file)            client_order (oracle)
     client_allocation (flat file)       client_allocation (oracle)
     client_execution (flat file)        client_execution (oracle)
                                         t_date (oracle)

6. Procedure for creating the dimension table structure in the dim_target schema (DWH):
   Connect to the dim_target schema (username = dim_target and password = target).
   Copy and paste the dimension table structures into the dim_target schema.
   The dimension table structure is available in the document file below:
   Rafi\CDM_Project\FILE_Source_Data\Dimension Table Structure.doc

7. Start a mapping to pass the stage table data (source_stage) into the dimension tables (dim_target schema),
   as follows:

                 stage                        target (DWH)
    a)  t_account (oracle)               dm_t_account_dim (oracle)
                                         dm_t_account_dim_except (oracle)

    b)  t_product (oracle)               dm_t_product_dim (oracle)
                                         dm_t_product_dim_excep (oracle)

    c)  t_market (oracle)                dm_t_market_dim (oracle)
                                         dm_t_market_dim_excep (oracle)

    d)  t_employee (oracle)              dm_t_employee_dim (oracle)
                                         dm_t_employee_dim_excep (oracle)


a and d are SCD Type 2.
b and c are SCD Type 1.
    
8. FACT table loading (In progress)




Wednesday, June 1, 2011

Mapping Parameters and Variables Overview


In the Designer, use mapping parameters and variables to make mappings more flexible. Mapping parameters and variables represent values in mappings and mapplets. If you declare mapping parameters and variables in a mapping, you can reuse a mapping by altering the parameter and variable values of the mapping in the session. This can reduce the overhead of creating multiple mappings when only certain attributes of a mapping need to be changed.
When you use a mapping parameter or variable in a mapping, first you declare the mapping parameter or variable for use in each mapplet or mapping. Then, you define a value for the mapping parameter or variable before you run the session.
Use mapping parameters and variables in a mapping to incrementally extract data. Use mapping parameters or variables in the source filter of a Source Qualifier transformation to determine the beginning timestamp and end timestamp for incrementally extracting data.
For example, you can create a user-defined mapping variable $$LastUpdateDateTime that saves the timestamp of the last row the Integration Service read in the previous session. Use $$LastUpdateDateTime for the beginning timestamp and the built-in variable $$$SessStartTime for the end timestamp in the source filter. Use the following filter to incrementally extract data based on the SALES.sales_datetime column in the source:
SALES.sales_datetime > TO_DATE ('$$LastUpdateDateTime') AND SALES.sales_datetime < TO_DATE ('$$$SessStartTime')

Mapping Parameters

A mapping parameter represents a constant value that you can define before running a session. A mapping parameter retains the same value throughout the entire session.
When you use a mapping parameter, you declare and use the parameter in a mapping or mapplet. Then define the value of the parameter in a parameter file. The Integration Service evaluates all references to the parameter to that value.
For example, you want to use the same session to extract transaction records for each of the customers individually. Instead of creating a separate mapping for each customer account, you can create a mapping parameter to represent a single customer account. Then use the parameter in a source filter to extract only data for that customer account. Before running the session, you enter the value of the parameter in the parameter file.
To reuse the same mapping to extract records for other customer accounts, you can enter a new value for the parameter in the parameter file and run the session. Or, you can create a parameter file for each customer account and start the session with a different parameter file each time using pmcmd. By using a parameter file, you reduce the overhead of creating multiple mappings and sessions to extract transaction records for different customer accounts.
When you want to use the same value for a mapping parameter each time you run the session, use the same parameter file for each session run. When you want to change the value of a mapping parameter between sessions, you can perform one of the following tasks:
* Update the parameter file between sessions.
* Create a different parameter file and configure the session to use the new file.
* Remove the parameter file from the session properties. The Integration Service uses the parameter value in the pre-session variable assignment. If there is no pre-session variable assignment, the Integration Service uses the configured initial value of the parameter in the mapping.
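For example, a minimal parameter file sketch (the folder, workflow, session, and parameter names are all hypothetical):

    [MyFolder.WF:wf_extract_sales.ST:s_m_extract_sales]
    $$CustomerAccount=001
    $$LastUpdateDateTime=06/01/2011 00:00:00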

Mapping Variables

Unlike a mapping parameter, a mapping variable represents a value that can change through the session. The Integration Service saves the value of a mapping variable to the repository at the end of each successful session run and uses that value the next time you run the session.
When you use a mapping variable, you declare the variable in the mapping or mapplet, and then use a variable function in the mapping to change the value of the variable. At the beginning of a session, the Integration Service evaluates references to a variable to determine the start value. At the end of a successful session, the Integration Service saves the final value of the variable to the repository. The next time you run the session, the Integration Service evaluates references to the variable to the saved value. To override the saved value, define the start value of the variable in a parameter file or assign a value in the pre-session variable assignment in the session properties.
Use mapping variables to perform incremental reads of a source. For example, the customer accounts in the mapping parameter example above are numbered from 001 to 065, incremented by one. Instead of creating a mapping parameter, you can create a mapping variable with an initial value of 001. In the mapping, use a variable function to increase the variable value by one. The first time the Integration Service runs the session, it extracts the records for customer account 001. At the end of the session, it increments the variable by one and saves that value to the repository. The next time the Integration Service runs the session, it extracts the data for the next customer account, 002. It also increments the variable value so the next session extracts and looks up data for customer account 003.
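As a sketch, the variable function mentioned above could appear in an Expression transformation port like this (assuming $$CustomerAccount is declared as a numeric mapping variable):

    -- evaluated for each row; every row sets the variable to its session
    -- start value plus one, so the value saved to the repository at the
    -- end of the session is incremented by one
    SETVARIABLE($$CustomerAccount, $$CustomerAccount + 1)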

Using Mapping Parameters and Variables

You can create mapping parameters and variables in the Mapping Designer or Mapplet Designer. Once created, mapping parameters and variables appear on the Variables tab of the Expression Editor. Use them in any expression in the mapplet or mapping. The Designer validates mapping parameters and variables in the Expression Editor of mapplets and mappings.
Use mapping parameters and variables in a source qualifier in a mapplet or mapping. When you use mapping parameters and variables in a Source Qualifier transformation, the Designer expands them before passing the query to the source database for validation. This allows the source database to validate the query.
When you create a reusable transformation in the Transformation Developer, use any mapping parameter or variable. Since a reusable transformation is not contained within any mapplet or mapping, the Designer validates the usage of any mapping parameter or variable in the expressions of reusable transformation. When you use the reusable transformation in a mapplet or mapping, the Designer validates the expression again. If the parameter or variable is not defined in the mapplet or mapping, or if it is used incorrectly in the reusable transformation, the Designer logs an error when you validate the mapplet or mapping.
When the Designer validates a mapping variable in a reusable transformation, it treats the variable as an Integer datatype.
You cannot use mapping parameters and variables interchangeably between a mapplet and a mapping. Mapping parameters and variables declared for a mapping cannot be used within a mapplet. Similarly, you cannot use a mapping parameter or variable declared for a mapplet in a mapping.

Initial and Default Values

When you declare a mapping parameter or variable in a mapping or a mapplet, you can enter an initial value. The Integration Service uses the configured initial value for a mapping parameter when the parameter is not defined in the parameter file. Similarly, the Integration Service uses the configured initial value for a mapping variable when the variable value is not defined in the parameter file, and there is no saved variable value in the repository.
When the Integration Service needs an initial value, and you did not declare an initial value for the parameter or variable, the Integration Service uses a default value based on the datatype of the parameter or variable.
Table 7-1 lists the default values the Integration Service uses for different types of data:

Table 7-1. Default Values for Mapping Parameters and Variables Based on Datatype

    Datatype      Default Value
    String        Empty string.
    Numeric       0
    Datetime      1/1/1753 A.D., or 1/1/1 when the Integration Service is configured for compatibility with 4.0.
For example, you create a new mapping using an Integer mapping variable, $$MiscellaneousExpenses. You do not configure an initial value for the variable or define it in a parameter file. The first time you run a session with the mapping, the Integration Service uses the default value for numeric datatypes, 0.
Or, suppose you create a mapping parameter $$MiscellaneousCosts to represent additional expenses that might become relevant in the future but do not currently exist in source data. You configure the parameter for a Decimal datatype. Since you want $$MiscellaneousCosts to evaluate to 0 when you do not have additional expenses, you set the initial value to 0.
As long as you do not define the parameter value in the parameter file, the Integration Service replaces $$MiscellaneousCosts with 0. When you want to include miscellaneous expenses in mapping calculations, set $$MiscellaneousCosts to that value in the parameter file.

Using String Parameters and Variables

When you enter mapping parameters and variables of a string datatype in a Source Qualifier transformation, use a string identifier appropriate for the source database. When the Integration Service expands a parameter or variable in a Source Qualifier transformation, the Integration Service replaces it with its start value, and then passes the expanded query to the source database. Most databases require single quotation marks around string values.
When you enter string parameters or variables using the PowerCenter transformation language, do not use additional quotes. The Integration Service recognizes mapping parameter and variable naming syntax in the PowerCenter transformation language. For example, you might use a parameter named $$State in the filter for a Source Qualifier transformation to extract rows for a particular state:
STATE = '$$State'
During the session, the Integration Service replaces the parameter with a string. If $$State is defined as MD in the parameter file, the Integration Service replaces the parameter as follows:
STATE = 'MD'
You can perform a similar filter in the Filter transformation using the PowerCenter transformation language as follows:
STATE = $$State
If you enclose the parameter in single quotes in the Filter transformation, the Integration Service reads it as the string literal “$$State” instead of replacing the parameter with “MD.”

Using Datetime Parameters and Variables

When you use a datetime parameter or variable in the Source Qualifier transformation, you might need to change the date format to the format used in the source.