Friday, 24 July 2015

ETL TESTING IMP DOCUMENT



ETL TESTING 

1) What is the difference between an ODS and a staging area?

a) ODS stands for Operational Data Store. The ODS is a layer between the source and target databases and is used to store recent data. The staging layer also sits between the source and target databases; it is used for data cleansing and for holding the data temporarily while it is loaded periodically.

2) Say a source has 1 lakh records and the target has 1 lakh records, but some records in a column have a mismatched value (i.e., instead of displaying 1000, the target column displays 100). The sources are DB2 and Oracle (different sources) and the target is SQL Server. How can you find the mismatch in a simple way?
a) Extract the data from the source and the target into Excel files and do a lookup or an IF comparison (i.e., compare the two sets of data).
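When both extracts can be staged in a common database, a set-difference query is another lightweight way to spot value mismatches. A minimal sketch, assuming the source and target data have been staged into tables src_orders_stage and tgt_orders (table and column names are illustrative only; Oracle uses MINUS, SQL Server uses EXCEPT):

-- rows whose column values differ between the staged source copy and the target
SELECT order_id, amount FROM src_orders_stage
MINUS
SELECT order_id, amount FROM tgt_orders;

Running the same query with the two SELECTs swapped shows mismatches in the other direction.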

3) Let's suppose we have some 10,000-odd records in the source system; when we load them into the target, how do we ensure that none of the 10,000 records loaded to the target contain garbage values?

How do we test it? We can't check every record, as the number of records is huge.

a) Select count(*) from both the source table and the target table and compare the results.
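The count check can be extended with a couple of targeted data-quality queries. A minimal sketch, assuming staged copies src_table and tgt_table and a text column amount_col that should hold only numbers (all names are assumptions):

-- compare record counts in one query (Oracle syntax)
SELECT (SELECT COUNT(*) FROM src_table) AS src_count,
       (SELECT COUNT(*) FROM tgt_table) AS tgt_count
FROM dual;

-- spot garbage values in a column that should be purely numeric
SELECT COUNT(*)
FROM tgt_table
WHERE NOT REGEXP_LIKE(amount_col, '^[0-9]+(\.[0-9]+)?$');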

4) What are active transformations / passive transformations?


a) Transformations can be active or passive. An active transformation can change the number of rows that pass through it, such as a Filter transformation that removes rows that do not meet the filter condition. A passive transformation does not change the number of rows that pass through it, such as an Expression transformation that performs a calculation on data and passes all rows through.

Active transformations: Advanced External Procedure, Aggregator, Application Source Qualifier, Filter, Joiner, Normalizer, Rank, Router, Update Strategy.

Passive transformations: Expression, External Procedure, Mapplet Input, Lookup, Sequence Generator, XML Source Qualifier, Mapplet Output.

6) Find out the number of columns in a flat file (or) in ETL testing, if the source is a flat file, how will you verify the data count validation?
a) By using the Unix "wc -l" command we can find the record (line) count of a file.
b) Not always (in my experience, quite rarely in fact). Most often the flat file is just that, a flat file. In UNIX the wc command is great; in Windows, one could open the file in Notepad and press CTRL+END to jump to the last row to see the row count, or use Excel. Or you could just have your ETL program do the counting for you. Those methods vary based on which ETL software you are using (SSIS, Informatica, etc.).
a) PowerCenter 9.5 is the latest version; PowerMart 7.0 was the last PowerMart release.
a) 1. To find faults in the software. 2. To verify that the software has no defects. 3. To find performance problems. 4. To give confidence in the software. (Select the best option.)
a) Data-Centric Testing: Data-centric testing revolves around testing the quality of the data. The objective is to ensure that valid and correct data is in the system. A couple of situations that call for data-centric testing are:
- ETL processes / data movement: when you apply ETL processes to a source database and transform and load the data into the target database.
- System migration / upgrade: when you migrate your database from one database to another, or upgrade an existing system on which the database is currently running.
10) ETL Testing in Informatica
As an ETL tester, what are the things to test in the Informatica tool? What types of testing have to be done in Informatica, and what should be tested in each type? Is there any documentation on ETL testing?

a) ETL Testing in Informatica:

1. First check that the workflow exists in the specified folder.
2. Run the workflow. If the workflow succeeds, check that the target table is loaded with the proper data; otherwise analyze the session log for the root cause of the failure and discuss it with the developer.
3. Validate the email notification configuration used when a workflow fails.

11) If a lookup (LKP) on the target table is used, can we update the rows without an Update Strategy transformation?

a) Yes, by using a dynamic lookup.
b) Even if you use a dynamic lookup, you should use an Update Strategy transformation to mark the records either for insert or update in the target using the lookup cache (the lookup will be cached on the target).

The requirement is not entirely clear here on whether to use a dynamic lookup or session properties.

Note: When you create a mapping with a Lookup transformation that uses a dynamic lookup cache, you must use Update Strategy transformations to flag the rows for the target tables.

12) Clean Data before Loading

Why is it necessary to clean data before loading it into the Warehouse?
a) Warehouse data is used as source data for data analysis and reporting. The data is organized into groups and categories (dimensions) and then summarized over those groups (aggregation). These groupings are based on exact values.
For example, "house", "houses", and "home" would fall into different groups because the values are not exact matches, even though logically they are the same and should be in the same group. The process of data cleansing would correct this. This is only one example of data cleansing.
The point is that if data is not cleansed, the resulting reports and OLAP cubes will contain too many categories, making them hard to read. The results would also be skewed because factual data (totals, counts, etc.) would be distributed across the good and bad categories. Once loaded into the data warehouse, this is very difficult, if not impossible, to change.
b) Cleansing is required as it helps in maintaining the uniformity of the target data.
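As a toy illustration of that "house"/"houses"/"home" example, a cleansing step might standardize the variants in a staging table before the load; a minimal sketch with assumed table and column names:

-- standardize spelling variants to a single category value
UPDATE stg_property
SET dwelling_type = 'house'
WHERE LOWER(TRIM(dwelling_type)) IN ('house', 'houses', 'home');
COMMIT;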

13) Informatica: if flat file name varies day by day ...

If we are using a flat file in our load and the flat file name changes daily, how do we handle this without changing the file name manually every day? For example, the file name changes depending on the date. What should I do?

a) You can use the Informatica file list option to handle dynamic file names. You may also need shell scripts to get this done.
b) At the session level, we can set the Source File Type to Indirect. When you select Indirect, the Integration Service finds the file list and reads each listed file when it runs the session. Inside the file list we can mention the file name that changes frequently.

15) What are the different lookup methods used in Informatica?
a) Connected lookup transformation and unconnected lookup transformation.

ETL PROCESS


2 Challenges of Data Warehouse Testing
Data selection from multiple source systems and the analysis that follows pose a great challenge.
Volume and complexity of the data.
Inconsistent and redundant data in a data warehouse.
Inconsistent and inaccurate reports.
Non-availability of history data.
3 Testing Methodology
Use of traceability to enable full test coverage of business requirements.
In-depth review of test cases.
Manipulation of test data to ensure full test coverage.

Fig 1 Testing Methodology (V-Model)

Provision of appropriate tools to speed the process of test execution & evaluation.
Regression testing.
4 Testing Types
The following are types of Testing performed for Data
warehousing projects.
1. Unit Testing.
2. Integration Testing.
3. Technical Shakedown Testing.
4. System Testing.
5. Operational Readiness Testing.
6. User Acceptance Testing.
4.1 Unit Testing
The objective of unit testing involves testing of business transformation rules, error conditions, and mapping of fields at staging and core levels.
Unit testing involves the following:
1. Check the mapping of fields present at the staging level.
2. Check for duplication of values generated using the sequence generator.
3. Check the correctness of surrogate keys, which uniquely identify rows in the database.
4. Check the data type constraints of the fields present at staging and core levels.
5. Check the population of status and error messages into the target table.
6. Check that string columns are left and right trimmed.
7. Check that every mapping implements the process-abort mapplet, which is invoked if the number of records read from the source is not equal to the trailer count.
8. Check that every object, transformation, source and target has proper metadata. Check visually in the data warehouse designer tool that every transformation has a meaningful description.
4.2 Integration Testing
The objective of integration testing is to ensure that workflows are executed as scheduled with the correct dependencies.
Integration testing involves the following:
1. Check the execution of workflows at the following stages: Source to Staging A, Staging A to Staging B, and Staging B to Core.
2. Check that target tables are populated with the correct number of records.
3. Record the performance of the schedule and analyse the performance results.
4. Verify that the dependencies among workflows between source to staging, staging to staging and staging to core have been properly defined.
5. Check for error log messages in the appropriate file.
6. Verify that jobs start at the pre-defined start time. For example, if the start time for the first job has been configured as 10:00 AM and the Control-M group has been ordered at 7:00 AM, the first job should not start in Control-M until 10:00 AM.
7. Check the restarting of jobs in case of failures.
4.3 Technical Shakedown Test
Due to the complexity of integrating the various source systems and tools, several teething problems are expected with the environments. A Technical Shakedown Test will be conducted prior to commencing System Testing, Stress & Performance, User Acceptance Testing and Operational Readiness Testing to ensure the following points are proven:
Hardware is in place and has been configured correctly (including the Informatica architecture, source system connectivity and Business Objects).
All software has been migrated to the testing environments correctly.
All required connectivity between systems is in place.
End-to-end transactions (both online and batch) have been executed and do not fall over.
4.4 System Testing
The objective of system testing is to ensure that the required business functions are implemented correctly. This phase includes data verification, which tests the quality of the data populated into the target tables.
System testing involves the following:
1. Check that the functionality of the system meets the business specifications.
2. Check the count of records in the source table against the number of records in the target table, followed by analysis of rejected records.
3. Check the end-to-end integration of systems and the connectivity of the infrastructure (e.g. hardware and network configurations are correct).
4. Check that all transactions, database updates and data flows function accurately.
5. Validate the business reports functionality.

Reporting functionality: Ability to report data as required by the business using Business Objects.
Report structure: Since the universe and reports have been migrated from a previous version of Business Objects, it is necessary to ensure that the upgraded reports replicate the structure/format and data requirements (unless a change/enhancement has been documented in the Requirement Traceability Matrix / Functional Design Document).
Enhancements: Enhancements such as report structure and prompt ordering which were in scope of the upgrade project will be tested.
Data accuracy: The data displayed in the reports/prompts matches the actual data in the data mart.
Performance: Ability of the system to perform certain functions within a prescribed time, and that the system meets the stated performance criteria according to agreed SLAs or specific non-functional requirements.
Security: That the required level of security access is controlled and works properly, including domain security, profile security, data security, user ID and password control, and access procedures, and that the security system cannot be bypassed.
Usability: That the system is usable as per the specified requirements.
User accessibility: That the specified type of access to data is provided to users.
Connection parameters: Test the connection.
Data provider: Check for the right universe and for duplicate data.
Conditions/selection criteria: Test the selection criteria for the correct logic.
Object testing: Test the object definitions.
Context testing: Ensure each formula has the correct input or output context.
Variable testing: Test each variable for its syntax and data type compatibility.
Formulas or calculations: Test each formula for its syntax and validate the data it returns.
Filters: Test that the data has been filtered correctly.
Alerts: Check extreme limits and report alerts.
Sorting: Test the sorting order of section headers, fields and blocks.
Totals and subtotals: Validate the data results.
Universe structure: The integrity of the universe is maintained and there are no divergences in terms of joins/objects/prompts.
4.5 User Acceptance Testing
The objective of this testing is to ensure that the system meets the expectations of the business users. It aims to prove that the entire system operates effectively in a production environment and that the system successfully supports the business processes from a user's perspective. Essentially, these tests run through "a day in the life of" the business users. The tests will also include functions that involve source system connectivity, job scheduling and business reports functionality.
4.6 Operational Readiness Testing (ORT)
This is the final phase of testing, which focuses on verifying the deployment of the software and the operational readiness of the application. The main areas of testing in this phase include:
Deployment Test
1. Tests the deployment of the solution.
2. Tests the overall technical deployment checklist and timeframes.
3. Tests the security aspects of the system, including user authentication and authorization, and user access levels.
Operational and Business Acceptance Testing
1. Tests the operability of the system, including job control and scheduling.
2. Tests include normal, abnormal and fatal scenarios.
5 Test Data
Given the complexity of data warehouse projects, preparation of test data is a daunting task. The volume of data required for each level of testing is given below.
Unit Testing - This phase of testing will be performed with a small subset (20%) of production data for each source system.
Integration Testing - This phase of testing will be performed with a small subset of production data for each source system.
System Testing - In this phase a subset of live data will be used which is sufficient in volume to contain all required test conditions (normal, abnormal and fatal scenarios) but small enough that workflow execution time does not impact the test schedule unduly.
6 Conclusion
Data warehouse solutions are becoming almost ubiquitous as a supporting technology for the operational and strategic functions at most companies. Data warehouses play an integral role in business functions as diverse as enterprise process management and monitoring, and production of financial statements. The approach described here combines an understanding of the business rules applied to the data with the ability to develop and use testing procedures that check the accuracy of entire data sets. This level of testing rigor requires additional effort and more skilled resources. However, by employing this methodology, the team can be more confident, from day one of the implementation of the DW, in the quality of the data. This will build the confidence of the end-user community, and it will ultimately lead to a more effective implementation.

1. Which basic tasks are primarily done by an ETL tester?

A1. An ETL tester primarily tests source data extraction, business transformation logic and target table loading. There are many tasks involved in doing this, which are given below (a couple of the target-side checks are sketched in SQL after the lists):

1. Stage table / SFS or MFS file created from the source upstream system - the below checks come under this:
a) Record count check
b) Reconcile records with source data
c) No junk data loaded
d) Key or mandatory fields not missing
e) Duplicate data not present
f) Data type and size check

2. Business transformation logic applied - the below checks come under this:
a) Business data checks, e.g. a telephone number can't be more than 10 digits, or a numeric field can't hold character data
b) Record count check after active and passive transformation logic is applied
c) Derived fields from the source data are correct
d) Check the data flow from stage to intermediate tables
e) Surrogate key generation check, if any


3. Target table loading from the stage file or table after applying transformations - the below checks come under this:
a) Record count check from the intermediate table or file to the target table
b) Mandatory or key field data not missing or null
c) Aggregate or derived values loaded in the fact table
d) Check views created based on the target table
e) Truncate and load table check
f) CDC applied on incremental load tables
g) Dimension table check & history table check
h) Business rule validation on the loaded table
i) Check reports based on the loaded fact and dimension tables
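For the target-side checks above, a couple of them can be expressed directly as SQL; a minimal sketch assuming a dimension table dim_customer with business key customer_id (names are illustrative):

-- duplicate check on the business key
SELECT customer_id, COUNT(*)
FROM dim_customer
GROUP BY customer_id
HAVING COUNT(*) > 1;

-- mandatory / key field should never be null in the target
SELECT COUNT(*) FROM dim_customer WHERE customer_id IS NULL;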











Q2. Generally, how are environment variables and paths set in Unix?

A2. Through the .profile file; normally it is executed at login, or we can source it manually as: . .profile

Q3. If a column is added to a table, what test cases will you write for it?

A3. The following test cases can be written for this (a SQL sketch of these checks follows the list):

1. Check that the column's data type and size are as per the data model.
2. Check that data is getting loaded into the column as per the DEM (data element mapping).
3. Check the valid values, null handling and boundary values for that column.
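A sketch of these checks in Oracle SQL, assuming the new column is order_status on a table named orders (both names are assumptions):

-- 1. data type and size as per the data model
SELECT data_type, data_length, nullable
FROM user_tab_columns
WHERE table_name = 'ORDERS' AND column_name = 'ORDER_STATUS';

-- 3. null check and valid-value profile for the new column
SELECT COUNT(*) FROM orders WHERE order_status IS NULL;
SELECT order_status, COUNT(*) FROM orders GROUP BY order_status;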


Q4. Let's suppose you are working on a project where the requirements keep changing. How would you tackle this?

A4. If the requirements change frequently, then we need to do a lot of regression testing on the functionality that has already been tested. You need to be ready with all your input test data and expected results, so that after checking the changed part you can rerun all the test cases and check the results in no time.

Q5. How do you modify your test data while doing the testing?

A5. If the input test data is an ASCII file, you can easily prepare it in Notepad++ based on the interface and then FTP it to the Unix server; if it's a table, you can insert rows into the table as per the data model. If the file is in a format other than ASCII, we can use an Abinitio graph to convert an Excel sheet into the required format, or use other tools available for the same purpose.










Q6. A table is partitioned by range on data_dt; suppose it already has a monthly partition PTN_01 (values less than (TO_DATE('01-Feb-2014', 'dd-mon-yyyy'))) for January 2014 data only, and we try to load data for Feb 2014. What will happen? If you get an error, how do you solve it?

A6. It will throw the error "ORA-14400: inserted partition key does not map to any partition". It means there is no partition for the Feb data we are trying to load, so add a new partition to the table for the February data as below:

Alter table table_name add partition partition_name values less than (TO_DATE('01-MAR-2014', 'dd-mon-yyyy'));

Note: Remember we can only create a new partition with a higher boundary than the previously created partitions (so here, for example, we can't add a partition for Dec 2013, because a partition with the higher boundary of Feb 2014 already exists).

Q7. How will you connect to an Oracle database from a Unix server?
A7. sqlplus username/password@dbserver

Q8. If an Oracle procedure fetches the error "No data found", what is the issue?

A8. In that procedure we are definitely retrieving data from a table and passing it into a variable. If that select statement does not fetch any row, we get this error.
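A minimal PL/SQL sketch of how the error arises and how it can be handled (table, column and id values are assumptions):

DECLARE
  v_status orders.order_status%TYPE;
BEGIN
  -- raises ORA-01403 (no data found) if no row matches
  SELECT order_status INTO v_status
  FROM orders
  WHERE order_id = 12345;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    DBMS_OUTPUT.PUT_LINE('No row found for this order_id');
END;
/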

Q9. If one of your wrapper Unix scripts fetches the error "not enough memory", what will you do?

A9. First we can check the disk usage with the df -h command, then clean up accordingly and run the script again.

Q10. Let's suppose we have two tables: item (primary key: item_id) and order (primary key: order_id, foreign key: item_id). If we try to delete items from the item table, will we be able to delete them? If not, how can we do it?





A10. If we attempt to delete or truncate a table whose unique or primary keys are referenced by enabled foreign keys in another table, we get an error; for a truncate it is "ORA-02266: unique/primary keys in table referenced by enabled foreign keys", and for a delete of a referenced row it is "ORA-02292: integrity constraint violated - child record found".

So, before deleting or truncating the parent table, either disable the foreign key constraints in the other tables, or delete the data from the child table (order) first and then from the parent table (item).
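A small sketch of both options, with assumed table, column and constraint names (orders is the child, item is the parent, fk_orders_item is an assumed foreign key name):

-- option 1: delete child rows first, then the parent row
DELETE FROM orders WHERE item_id = 101;
DELETE FROM item WHERE item_id = 101;
COMMIT;

-- option 2: disable the foreign key for a truncate, then re-enable it
ALTER TABLE orders DISABLE CONSTRAINT fk_orders_item;
TRUNCATE TABLE item;
ALTER TABLE orders ENABLE CONSTRAINT fk_orders_item;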

Q11. Why do we create an index on a table? Please explain.

A11. In a nutshell: we use indexes for faster retrieval of data. Let's suppose I create an order table which will contain billions of rows, and I know that most of the time I will be querying this table using order_id; then I should create an index on order_id for faster results.
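A minimal sketch of that index (table and column names assumed):

-- index on the column most frequently used in lookups
CREATE INDEX idx_orders_order_id ON orders (order_id);

-- queries filtering on order_id can now use the index
SELECT * FROM orders WHERE order_id = 12345;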

Q12. What will be the default permission of a file created in Unix? How can we give full access to everyone?

A12. When a file is created, the permission flags are set according to the file mode creation mask, which can be set using the "umask" command. If the umask value is set to 002, the default file permission is 664 (-rw-rw-r--). We can change the permissions of a file as below:
chmod 777 filename (4: read, 2: write, 1: execute)

Q13. How can we link a defect with a test script in QC?

A13. First we fail the test case step in the Test Lab, then click on New Defect (the red symbol), enter the defect details and raise it. That defect is then linked with that particular step of the test case. One more thing: the issues we mention in the actual result come into the defect description automatically (no need to enter the issue details again).

Q14. What are the different methods to load a table from files in Oracle? Also tell me the methods for Teradata.

A14. SQL*Loader, external table loading, and loading through a JDBC driver. In Teradata we usually use MultiLoad or FastLoad.
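As one example of the external table approach, a minimal Oracle sketch; the directory object, file name and columns are all assumptions:

CREATE TABLE ext_orders (
  order_id  NUMBER,
  order_amt NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir   -- an existing DIRECTORY object is assumed
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('orders.csv')
);

-- the flat file can then be queried or loaded like a normal table
INSERT INTO orders SELECT * FROM ext_orders;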





Q15. What are the things you will check before you start testing? What will be your deliverables?

A15. Before starting the testing, at least the requirement document, functional spec, technical spec, interface document, DEM and unit test results should be available. My deliverables will be the test plan, test spec, test scripts, defect summary with root cause analysis, test execution/result report and automation scripts (if created).

Q16. What do you understand by active and passive transformations in Informatica?

A16. Active transformation -
number of records in input != number of records in output (e.g. Filter, Router, Source Qualifier)

Passive transformation -
number of records in input = number of records in output (e.g. Expression, Lookup, Stored Procedure)

Q17. Let's suppose we have an order table that has duplicate order_id values. How can we delete the duplicate rows from the table? Tell me at least two methods you know.

A17. First method:

create table order_new as select distinct * from order;
drop table order;
rename order_new to order;

Note: This method is faster, but we need to recreate indexes, partitions, constraints, etc.

Second method:

delete from order a where rowid > (select min(rowid) from order b where a.order_id = b.order_id);

Note: Here we are deleting the duplicate rows based on rowid, which is uniquely assigned to each row by Oracle.


Q18. How will you find the second highest salary from the employee table? Tell me at least two methods.

A18. First method - we can use a sub-query as below:

select max(sal)
from emp
where sal not in (select max(sal) from emp);

Note: First we exclude the highest salary; the maximum of what remains is the second highest.

Second method - we can use ROW_NUMBER for the same as below:

SELECT empno, sal
FROM
(
  select empno, sal, ROW_NUMBER() OVER (order by sal desc) RN
  from emp
)
WHERE RN = 2;

Q19. How can we find the last two modified files for a particular file mask abc* in Unix?

A19. We can do this using a very simple command: ls -lrt abc* | tail -2

Note: To check whether the last command was successful or not, we can use echo $?


Q20. How will you delete the last line of a file? Tell at least two methods.

A20. First, we can do it using sed: sed -i '$d' file_name

Second method:

cp file_main file_bkp
sed '$d' file_bkp > file_main
rm -f file_bkp

Note: The > operator redirects (overwrites) the output into another file, while >> appends the output to the end of the file.

Q21. Let's suppose we are migrating data from a legacy file system to Oracle database tables. How would you validate that the data has migrated properly? Tell me the important test scenarios and test cases you would write to test this requirement, and how you would test each scenario.

A21. The following scenarios can be written for the same (a couple of them are sketched in SQL after the list):

1) Check that all table DDL is as per the data model (desc table_name).
2) Check that all records are loaded from source to target (match record counts).
3) Check that null values or junk data are not loaded into the table (check record counts with not-null conditions).
4) Check valid values for columns (using group by, find the counts of particular values for a field).
5) Check that the same data is loaded (put a join between the source table (if there), stage table and target table and check the result).
6) Check key fields (sum the number fields for target and source and match them).
7) Check the business logic (create a sheet with inputs and expected outputs).
8) After the initial load, check incremental loading (insert/delete/update - check all the CDC cases).
9) Check the output file layout (if any).
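A minimal SQL sketch of checks 5) and 6), assuming the legacy data has been staged into stg_customer and loaded into tgt_customer (all table and column names are illustrative):

-- 5) same data loaded: rows where staged and target values differ
SELECT s.customer_id
FROM stg_customer s
JOIN tgt_customer t ON t.customer_id = s.customer_id
WHERE s.customer_name <> t.customer_name
   OR s.balance_amt <> t.balance_amt;

-- 6) key-field reconciliation by summing a numeric column on both sides
SELECT (SELECT SUM(balance_amt) FROM stg_customer) AS stage_sum,
       (SELECT SUM(balance_amt) FROM tgt_customer) AS target_sum
FROM dual;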
Q43. How can we delete files older than one year in Unix?

A43. find ./your_dir -mtime +365 -type f -delete

Note: -type f -> files only; -mtime +365 -> modified more than 365 days ago.
 
 
Q44. What's the difference between the delete and truncate commands?
A44: The differences are given below (a small example follows the list):

a) Delete is a DML statement (which requires a commit afterwards) while Truncate is a DDL statement (which auto-commits).

b) With Delete we can put conditions, so we can delete only what is required, while Truncate deletes all the data in the table (we can't put conditions on it).

c) Delete activates triggers, as the individual row deletions are logged, while Truncate never activates a trigger (no row-level logging); that's why rollback is possible with Delete but not with Truncate.

d) Truncate always locks the table, while a Delete statement uses row locks before deleting rows.

e) Truncate is faster than Delete as it doesn't keep the row-level logs.
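A quick illustration of points a) and b), with assumed table and column names:

-- delete only what matches a condition, then commit (DML)
DELETE FROM orders WHERE order_status = 'CANCELLED';
COMMIT;

-- remove all rows; no WHERE clause is possible and it auto-commits (DDL)
TRUNCATE TABLE orders;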
 
Q45. Write a query for the below: to find out the names of the employees whose salary is greater than their manager's salary.

A45. Query:

Select Emp.name
from Employee Emp, Employee Mgr
where Emp.managerid = Mgr.empid
and Emp.sal > Mgr.sal;



Q46. What is BPT (Business Process Testing)?

A46. BPT stands for Business Process Testing. It enables SMEs to design test scripts early in the development cycle. It consists of reusable business components which are converted into business process tests.
 
Q47. How can we add partitions in Teradata? Please tell me the query.

A47. Query:

Alter table table_name
modify primary index
add range between date '2014-04-01' and date '2015-06-01' each interval '1' month;
 
Q48. What is a multi-file system (MFS)? How can we copy an MFS file?

A48. A multi-file system (MFS) file is a file which is divided into multiple partition files. Using an MFS file we can process a big input file efficiently, because the divided files can be processed in parallel, which makes the processing faster.

We can use the below command to copy an MFS file:

m_cp file1 file2
 
Q48. You have worked on Informatica, so tell me the difference between FILTER and ROUTER.

A48. There are many differences between Filter and Router, which are given below:

a) A Filter transformation has a single input and a single output, while a Router has a single input and multiple outputs.

b) Router is like a CASE statement in a database, while Filter is like a WHERE clause.

c) A Router transformation doesn't block any row, i.e. the number of rows in output equals the number in input, while Filter blocks rows, i.e. the number of rows in output <= the number of rows in input.
Q49. In an Abinitio graph, how can we check the DML and XFR for a transformation?

A49. Right-click on the transformation -> Properties -> go to the file layout -> check the path for the DML (or the embedded DML).
Right-click on the transformation -> Properties -> check the transformation logic -> check the path for the XFR (or the embedded logic).
 
Q50. Are you aware of the IBM Tivoli scheduler? How can we get a job stream into the plan? How can we execute that job stream even though dependencies are there?

A50. IBM Tivoli is used to schedule the job streams which call the wrapper script/graph to perform the job.

Go to the database of job streams and then submit the particular job stream from there; it will appear in the plan. We can increase the priority to high (101) and release all the dependencies to run that stream.



 
Q51. What do you understand by full load and incremental load?

A51. A full load is basically a dump of the entire data from the source table to the target table. Every time a full load happens, we truncate the table and load the entire data again.

An incremental load is the loading of only the delta data from source to target at a regular interval so that source and target stay synchronized. While applying the delta to the target table, we normally do change data capture (CDC), i.e. identify what is getting inserted, updated or deleted.



 
Q1. How can you create BPT test scripts?

A1. Please follow the below steps to create a BPT:
First you have to go to the Business Components section of ALM:
a. Create a new component there.
b. Create the component steps with parameters.
c. Put parameter variables in brackets, like <<< input1 >>>.
d. Check that they are reflected in the parameters.
e. Add the details for the business component.
f. Set the status to Ready once all the steps are written.
Now go to the Test Plan in ALM:
a. Create a new test script; its type will be Business Process.
b. Put in all the required details for the script.
c. In the test script tab, pull in the component you created (you can pull more than one component as required and integrate them in a particular test script).
d. After that you can see the parameters which you defined earlier for the components (suppose you defined 4 parameters for the component, then you will see 4 I/O parameters; the iteration will default to 1).
e. If you want to check the same parameters for different iterations, you can increase the iterations here.
f. Now the BPT is ready (you can see a green box).

Q2. How can you run BPT test scripts?

A2. Please follow the below steps to run BPT test scripts:

First go to the Test Plan in ALM:
a. Go to the test script tab for the BPT test script and click on Iterations.
b. By default 1 iteration is mentioned there; after clicking it, you get the parameters list.
c. Define the variable values you want to pass for all the parameters.
d. If you want to add one more iteration, click on Add Iteration above.
e. Now you can define parameter values for the added iterations as well.
f. After defining all the values, we can pull this script into the Test Lab.
Now go to the Test Lab in ALM:
a. Go to the particular BPT you want to run and click Run with Manual Runner.
b. Now all the defined parameter values will be substituted.
c. You need to follow the steps now.
d. Check that all the iterations are checked for the mentioned steps and pass them.
e. The BPT will be passed.


Q3. How can we link existing defects with a particular test case?

A3. Go to the particular test case in the Test Lab in ALM and double-click on it; a window will appear. Click on the Linked Defects tab, then click on Link Existing Defect; we can then enter the defect id and click Link. That defect will be linked with that particular test case.


Q4. Let's suppose that, to make table loading faster, the index is disabled as a pre-load activity, and then duplicates get loaded. After loading, when the index is enabled as a post-load activity, it will fetch an error. How do we resolve this issue?

A4. You can follow the below steps to mitigate this issue:
a. First disable the primary key constraint.
b. Drop the index after that.
c. Remove the duplicates using rowid.
d. Now recreate the index on the table.
e. Finally, re-enable the primary key constraint.


Q5. How can we pass a variable value through the command box in Teradata?

A5. We can use the below query for this –
SELECT COUNT(*)
FROM <TABLE_NAME>
WHERE <date_field> = CAST('?yyyymmdd' AS DATE FORMAT 'yyyymmdd')


Q6. How can we search for a particular job in Control-M?

A6. You can follow the below steps to find a job in Control-M:
a. Log in to Control-M.
b. Click Customizable Filter in the Viewpoint tab of Control-M.
c. Filter based on folder name and server name (we can use wildcard characters here, *).
d. After filtering, we can see all the job instances coming under that folder.
e. In a particular folder, all the jobs under it will be shown.

Q7. How can we check the roles assigned to a particular table/view in Teradata?

A7. We can use the dbc.allrolerights view to find the roles assigned to a table/view:
Select * from dbc.allrolerights where tablename = <table_name>;
We can see the fields for role name, access rights and grantor name here.


Q8. In Control-M, how can we run a job instantly when it is waiting due to dependencies or its schedule?

A8. Right-click on that particular job, free the job, and click Run Now.



Q9. In Control-M, how can we order a new job that is not scheduled?

A9. Click the Order button on the Home tab, then select the Control-M server, the folder, and the particular job (if all jobs are selected, then all the jobs in that folder will be submitted to run). Also check the boxes "ignore scheduling criteria" and "order as independent flow".


Q10. How can we modify the command for a particular job in Control-M?

A10. Right-click on the particular job, go to Properties, and click Modify Job; now we can change the command and other settings, like the schedule for that job.



Q11. In Tivoli, a job is stuck due to dependencies and resource constraints; how can we run that job?

A11. Right-click on the job stream, increase the priority to 101, release all dependencies, and remove the time restrictions; then the job should run. If it is still stuck, do the same thing at the job level; it will definitely run.



Q12. In Tivoli, how can we submit a job stream which is not scheduled?

A12. Go to the default database of job streams, search for that particular job stream, and submit it from there. It will then be reflected in the scheduled job streams.

Note: If the job stream is in draft mode, then we can't submit it. The job stream should be active; only then can we submit it to the plan.
 
   

Post a Comment