Friday, 24 July 2015

ENTIRE TESTING MATERIAL

 
1) Equivalence partitioning is a software testing technique that divides the input data into partitions from which test cases can be derived. In principle, test cases are designed to cover each partition at least once, thereby reducing the total number of test cases that must be developed.
 
Although equivalence partitioning is in rare cases also applied to the outputs of a software component, it is typically applied to the inputs of the component under test. The equivalence partitions are usually derived from the requirements specification for input attributes that influence the processing of the test object. An input has certain ranges which are valid and other ranges which are invalid. Invalid data here does not mean that the data is incorrect; it means that the data lies outside a specific partition. This is best explained by the example of a function which takes a parameter "month". The valid range for the month is 1 to 12, representing January to December. This valid range is called a partition. In this example there are two further partitions of invalid ranges: the first invalid partition is <= 0 and the second invalid partition is >= 13.
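To make this concrete, here is a minimal sketch in Python (the is_valid_month function and the chosen representative values are illustrative, not taken from any specific system): one test value is picked from each of the three partitions.

# Hypothetical function under test: returns True only for a valid month (1-12).
def is_valid_month(month):
    return 1 <= month <= 12

# One representative value per equivalence partition is enough:
#   invalid partition <= 0, valid partition 1-12, invalid partition >= 13.
def test_equivalence_partitions():
    assert is_valid_month(0) is False    # representative of the <= 0 partition
    assert is_valid_month(6) is True     # representative of the valid 1-12 partition
    assert is_valid_month(13) is False   # representative of the >= 13 partition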
 
2) Boundary Value Analysis (BVA) is a test design technique that works on the principle of testing the values at and around the boundaries of an input range:
1. max, max-1, max+1
2. min, min-1, min+1
For example, if a combo box accepts values from 0 to 100, you need to test only six possible conditions by passing these values:
for the maximum: 100, 100+1 and 100-1
for the minimum: 0, 0-1 and 0+1
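A minimal sketch of the same idea, assuming a hypothetical field that accepts values from 0 to 100 (the accepts function below is invented for illustration):

# Hypothetical validator for a field that accepts 0..100 inclusive.
def accepts(value, minimum=0, maximum=100):
    return minimum <= value <= maximum

def boundary_values(minimum, maximum):
    # The six classic BVA inputs: min-1, min, min+1, max-1, max, max+1.
    return [minimum - 1, minimum, minimum + 1, maximum - 1, maximum, maximum + 1]

def test_boundaries():
    expected = {-1: False, 0: True, 1: True, 99: True, 100: True, 101: False}
    for value in boundary_values(0, 100):
        assert accepts(value) == expected[value]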
 
 
 
3) Software Development Life Cycle

The various activities undertaken when developing software are commonly modelled as the Software Development Life Cycle (SDLC). It begins with the identification of requirements for the software and ends with the formal verification of the developed software against those requirements.

Popular Models are:
1. V-Model
2. Spiral Model
3. Waterfall Model










The V-model is a software development process.

Requirements analysis

In the Requirements analysis phase, the requirements of the proposed system are collected by analyzing the needs of the user(s). This phase is concerned with establishing what the ideal system has to perform; however, it does not determine how the software will be designed or built. Usually, the users are interviewed and a document called the user requirements document is generated.
The user requirements document will typically describe the system's functional, physical, interface, performance, data and security requirements as expected by the user. It is the document that the business analysts use to communicate their understanding of the system back to the users. The users carefully review this document, as it will serve as the guideline for the system designers in the system design phase. The user acceptance tests are designed in this phase. See also Functional requirements and Non-functional requirements.

System Design

Systems design is the phase where system engineers analyze and understand the business of the proposed system by studying the user requirements document. They figure out possibilities and techniques by which the user requirements can be implemented. If any of the requirements are not feasible, the user is informed of the issue, a resolution is found, and the user requirements document is edited accordingly.
The software specification document, which serves as a blueprint for the development phase, is generated. This document contains the general system organization, menu structures, data structures, etc. It may also hold example business scenarios, sample windows and reports for better understanding. Other technical documentation, such as entity diagrams and a data dictionary, will also be produced in this phase. The documents for system testing are prepared in this phase.

Architecture Design

The phase of the design of computer architecture and software architecture can also be referred to as high-level design. The baseline in selecting the architecture is that it should realize all of the requirements. The high-level design typically consists of the list of modules, a brief description of the functionality of each module, their interface relationships, dependencies, database tables, architecture diagrams, technology details, etc. The integration test design is carried out in this phase.

Module Design

The module design phase can also be referred to as low-level design. The designed system is broken up into smaller units or modules and each of them is explained so that the programmer can start coding directly. The low level design document or program specifications will contain a detailed functional logic of the module, in pseudocode:
  • database tables, with all elements, including their type and size
  • all interface details with complete API references
  • all dependency issues
  • error message listings
  • complete input and outputs for a module.
The unit test design is developed in this stage.

Validation Phases

Unit Testing

In the V-model of software development, unit testing is the first stage of the dynamic testing process. According to software development expert Barry Boehm, a fault discovered and corrected in the unit testing phase is more than a hundred times cheaper than one corrected after delivery to the customer.
Unit testing involves analysis of the written code with the intention of eliminating errors. It also verifies that the code is efficient and adheres to the adopted coding standards. Testing is usually white box and is done using the unit test design prepared during the module design phase. It is typically carried out by software developers.
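As a hedged illustration (the add function and test class below are hypothetical examples, not from the text), a unit test exercises the smallest testable piece of code in isolation, here using Python's unittest module:

import unittest

# Hypothetical unit under test: the smallest testable piece of the application.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    def test_adds_two_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()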

Integration Testing

In integration testing the separate modules will be tested together to expose faults in the interfaces and in the interaction between integrated components. Testing is usually black box as the code is not directly checked for errors.

System Testing

System testing compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing.

 User Acceptance Testing

Acceptance testing is the phase of testing used to determine whether a system satisfies the requirements specified in the requirements analysis phase. The acceptance test design is derived from the requirements document. The acceptance test phase is the phase used by the customer to determine whether to accept the system or not.
Spiral Model
 
 
 
 
 
4) Alpha and Beta Testing
 
Alpha Testing is conducted at the developer's site by a customer. The customer uses the software while the developer records the usage problems and errors. It is conducted in a controlled environment.
 
Beta Testing is conducted at one or more customer sites by the end users. It is live testing, in an environment not controlled by the developers. The customers record the usage problems and errors and report them to the developers.
 
Integration testing is the activity of software testing in which individual software modules are combined and tested as a group; it checks whether the integrated modules communicate properly. It occurs after unit testing and before system testing. Integration testing takes as its input modules that have been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
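A minimal sketch of the idea, with two invented modules: after each passes its own unit tests, an integration test checks that they communicate correctly through their shared interface.

# Hypothetical module A: builds an order record.
def create_order(item, quantity):
    return {"item": item, "quantity": quantity}

# Hypothetical module B: prices an order produced by module A.
def price_order(order, unit_price):
    return order["quantity"] * unit_price

# Integration test: verifies the interface between the two modules,
# i.e. that price_order understands the dictionary create_order produces.
def test_order_modules_work_together():
    order = create_order("pen", 3)
    assert price_order(order, unit_price=10) == 30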
 
Ad hoc testing is a commonly used term for software testing performed without planning and documentation.
 
 
Ad hoc testing is a part of exploratory testing and is the least formal of test methods. The tester seeks to find bugs by any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result. Ad hoc testing is most often used as a complement to other types of testing.
 
 
5) Bug Life Cycle: 
 
 
1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved. 
 
2. Open: After a tester has posted a bug, the tester's lead verifies that the bug is genuine and changes the state to "OPEN". 

3. Assign: Once the lead changes the state to "OPEN", he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to "ASSIGN". 

4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to "TEST". This indicates that the bug has been fixed and released to the testing team. 
 
5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. There can be many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software. 
 
6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”. 
 
7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to "DUPLICATE". 
 
8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”. 
 
9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again. 
 
10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved. 
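As an illustrative sketch only (the state names follow the list above, but the transition table is a simplification, not a standard), the life cycle can be modelled as a set of states with allowed transitions:

from enum import Enum

class BugState(Enum):
    NEW = "new"
    OPEN = "open"
    ASSIGN = "assign"
    TEST = "test"
    DEFERRED = "deferred"
    REJECTED = "rejected"
    DUPLICATE = "duplicate"
    VERIFIED = "verified"
    REOPENED = "reopened"
    CLOSED = "closed"

# Simplified transition table based on the states described above.
ALLOWED = {
    BugState.NEW: {BugState.OPEN, BugState.REJECTED, BugState.DUPLICATE, BugState.DEFERRED},
    BugState.OPEN: {BugState.ASSIGN},
    BugState.ASSIGN: {BugState.TEST, BugState.DEFERRED},
    BugState.TEST: {BugState.VERIFIED, BugState.REOPENED},
    BugState.VERIFIED: {BugState.CLOSED},
    BugState.REOPENED: {BugState.ASSIGN},
}

def can_move(current, target):
    # A bug may only move to a state listed for its current state.
    return target in ALLOWED.get(current, set())

assert can_move(BugState.TEST, BugState.VERIFIED)
assert not can_move(BugState.CLOSED, BugState.OPEN)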
 
While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.
 
 
6) A Traceability Matrix is a document, usually in the form of a table, that correlates any two baselined documents that require a many-to-many relationship in order to determine the completeness of the relationship. For instance, a requirements traceability matrix is used to check whether the current project requirements are being met, and to help in the creation of a Request for Proposal, various deliverable documents, and project plan tasks.[1]
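A minimal sketch of the idea (the requirement and test-case IDs below are made up): the matrix can be represented as a mapping from requirement IDs to the test cases that cover them, and any requirement with no covering test is flagged as a gap.

# Hypothetical requirements-to-test-cases traceability matrix.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test case yet -> coverage gap
}

uncovered = [req for req, tests in traceability.items() if not tests]
print("Requirements without test coverage:", uncovered)   # ['REQ-003']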
 
 
7) Levels of the Capability Maturity Model
 
There are five levels defined along the continuum of the CMM[9], and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."
 
Level 1 - Ad hoc (Initial) 
It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes. 
 
Level 2 - Repeatable 
It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress. 
 
Level 3 - Defined 
It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization. 
 
Level 4 - Managed 
It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development ). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process Capability is established from this level. 
 
Level 5 - Optimized 
It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements. 
 
At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, to shift the mean of the process performance) to improve process performance. This would be done at the same time as maintaining the likelihood of achieving the established quantitative process-improvement objectives
 
8) The Deliverables from the Test team would include the following:
1. Test Strategy.
2. Test Plan.
3. Test Case Documents.
4. Defect Reports.
5. Status Reports (Daily/weekly/Monthly).
6. Test Scripts (if any).
7. Metric Reports.
8. Product Sign off Document.


9) Defect Reports:

1. S.no 

2. Iteration 

3. Location of Defect 

4. Defect Description 

5. Defect Severity 

6. Disposition ----- Accept or Reject 

7. Defect Cause ----- Logic, User Interface, Design Issue, etc. 

8. Source Phase of Defect --- Requirements, Low-Level Design, High-Level Design, etc. 

9. Proposed Corrective Action ---- Rework / Raise change request
10. Environment
11. Steps
12. Attachments 

13. Status ----- Open / Closed / CR Raised 

14. Remarks


10) What are the different types of estimation methods available in market for Testing?
 
Experience Based - Analogies and experts: 
 
1) Metrics collected from previous tests. 
2) You have already tested a similar application in a previous project. 
3) Inputs are taken from Subject Matter experts who know the application (as well as testing) very well. 
 
Three-point estimation: This technique is based on statistical methods. The task is broken down into subtasks (similar to a WBS) and then three estimates are made for each chunk: 
Optimistic Estimate (best case scenario in which nothing goes wrong and all conditions are optimal) = a 
Most Likely Estimate (most likely duration; there may be some problems but most things will go right) = m 
Pessimistic Estimate (worst case scenario in which everything goes wrong) = b 
Formula for the estimate: E = (a + 4*m + b) / 6 

Standard Deviation: SD = (b - a) / 6 
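A small worked example (the hour values are purely illustrative):

# Illustrative three-point (PERT) estimate for one task, in hours.
a, m, b = 4, 6, 14                  # optimistic, most likely, pessimistic

estimate = (a + 4 * m + b) / 6      # = 7.0 hours
std_dev = (b - a) / 6               # = 1.67 hours (approx.)

print(f"Estimate E = {estimate:.2f} h, SD = {std_dev:.2f} h")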

 
Work Breakdown Structure - It is created by breaking the test project down into small pieces. Modules are divided into sub-modules, sub-modules are further divided into functionalities, and functionalities are divided into sub-functionalities.
Review all the requirements from the requirements document to make sure they are covered in the WBS. Then figure out the number of tasks your team needs to complete and estimate the duration of each task.

Delphi technique – Same as the WBS approach above, but here the functionalities and tasks are allocated to individual team members, and each team member gives an estimate of how many hours they will take to complete the task.
On average, this technique gives good confidence in the estimation, and it can be combined with other techniques.

Use case point estimation method: The use case point (UCP) method is gaining popularity because nowadays application development is modelled around use case specifications. Test case development is normally kicked off after the use cases are baselined, so the various factors in the use cases are directly proportional to the testing effort. 
A use case is a document which specifies the different users, systems and other stakeholders interacting with the application concerned; these are named 'Actors'. The interactions accomplish defined goals, protecting the interests of all stakeholders, through different behaviours or flows termed scenarios.

11) What is the difference between priority and severity?
                                                                                                 
Priority: is associated with scheduling. Priority means something deserves prior attention; it indicates the order of importance (or urgency).

Severity: the severity of a problem is defined in accordance with the customer's risk assessment and reflects the impact of the defect on the system.

High Severity and Low Priority - The application crashes after repeated use of some functionality (for example, if the Save button is used 200 times, the application crashes).
The severity is high because the application crashed, but the priority is low because it does not need to be debugged right now;
it can be debugged later.

High Priority and Low Severity - Suppose the logo of a web site such as Yahoo is misspelled as "Yho".
Then the priority is high but the severity is low.

Because it affects the name of the site, it is important to fix quickly (priority).
But it is not going to crash the application because of a spelling change, so the severity is low.

 
12) What is Risk?
 
“Risks are future uncertain events with a probability of occurrence and a potential for loss.”
Risk identification and management are major concerns in every software project. Effective analysis of software risks helps in effective planning and assignment of work.
Risks are identified, classified and managed before actual execution of the program. These risks are classified into different categories.





Categories of risks:

Schedule Risk:

The project schedule slips when project tasks and schedule release risks are not addressed properly.
Schedule risks mainly affect the project and ultimately the company's economy, and may lead to project failure.
Schedules often slip due to the following reasons:

  • Wrong time estimation
  • Resources (staff, systems, individual skills, etc.) are not tracked properly.
  • Failure to identify complex functionalities and time required to develop those functionalities.
  • Unexpected project scope expansions.

Budget Risk:
  •  Wrong budget estimation.
  • Cost overruns
  • Project scope expansion
Operational Risks:
Risks of loss due to improper process implementation, a failed system, or external events.
Causes of Operational risks:
  • Failure to address priority conflicts
  • Failure to resolve the responsibilities
  • Insufficient resources
  • No proper subject training
  • No resource planning
  • No communication in team.
Technical risks:
Technical risks generally lead to failure of functionality and performance.
Causes of technical risks are:
  • Continuous changing requirements
  • No advanced technology available or the existing technology is in initial stages.
  • Product is complex to implement.
  • Difficult project modules integration.

Programmatic Risks:
These are external risks beyond the operational limits; they are uncertain risks outside the control of the program.
These external events can be:
  • Running out of funds
  • Market developments
  • Changing customer product strategy and priorities
  • Government rule changes

 
13) Web Testing: Complete guide on testing web applications
Web testing checklist:
1) Functionality Testing
2) Compatibility testing
3) Performance testing
4) Security testing

1) Functionality Testing:
Test for: all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookie testing.
Check all the links:
  • Test all the links from all the pages of the specific domain under test.
  • Test links that jump within the same page.
  • Check for broken links in all the above-mentioned links (a small automated sketch follows this list).
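A minimal sketch of automated link checking, assuming the third-party requests and BeautifulSoup libraries are available (the URL is a placeholder):

import requests
from bs4 import BeautifulSoup

def find_broken_links(page_url):
    # Fetch a page, collect its absolute links, and report those that do not return a 2xx/3xx status.
    page = requests.get(page_url, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")
    broken = []
    for anchor in soup.find_all("a", href=True):
        href = anchor["href"]
        if not href.startswith("http"):
            continue  # skip relative/mailto links in this simple sketch
        try:
            response = requests.head(href, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                broken.append((href, response.status_code))
        except requests.RequestException:
            broken.append((href, "no response"))
    return broken

# Example (placeholder domain):
# print(find_broken_links("https://example.com"))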
Cookies testing:
Cookies are small files stored on the user's machine. They are basically used to maintain sessions, mainly login sessions. Test the application by enabling and disabling cookies in your browser options. Test whether the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check the login sessions and user stats after the session ends. Check the effect on application security of deleting the cookies.
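As a hedged sketch (the URL is a placeholder), part of cookie testing can be automated by inspecting the attributes of the cookies the server sets:

import requests

# Placeholder URL; in a real test this would be the login page of the application under test.
response = requests.get("https://example.com/login", timeout=10)

for cookie in response.cookies:
    # Session cookies have no expiry; persistent cookies should also be checked for Secure/HttpOnly flags.
    print(cookie.name,
          "secure" if cookie.secure else "NOT secure",
          "session cookie" if cookie.expires is None else f"expires at {cookie.expires}")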
Test for navigation:
Navigation means how the user surfs the web pages and uses the different controls like buttons and boxes, and how the user follows the links on the pages to reach different pages.


Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark colours annoy users and should not be used in the site theme. You can follow the commonly accepted standards used for web page and content building, such as those mentioned above about annoying colours, fonts, frames, etc.
Content should be meaningful, all anchor text links should work properly, and images should be placed properly with proper sizes.

These are some basic standards that should be followed in web development. Your task is to validate all of this during UI testing.
2) Compatibility Testing:
The compatibility of your web site is a very important testing aspect. The following compatibility tests should be executed:

  • Browser compatibility
  • Operating system compatibility
  • Mobile browsing
Mobile browsing:
Mobile browsing is increasingly important. Test your web pages on mobile browsers; compatibility issues may appear on mobile devices.

3) Performance testing:
A web application should sustain heavy load. Web performance testing should include:


  • Web Load Testing
  • Web Stress Testing
4) Security Testing:
Following are some test cases for web security testing:
  • Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open (see the sketch after this list).
  • If you are logged in with a username and password and browsing internal pages, try changing URL parameters directly. For example, if you are viewing statistics for a publisher site with site ID = 123, try changing the site ID in the URL to a different site ID that is not related to the logged-in user. Access to other users' statistics should be denied.
  • Try invalid inputs in input fields such as the login username, password and other text boxes, and check how the system reacts to all invalid inputs.
  • Web directories and files should not be directly accessible unless a download option is provided.
  • Test the CAPTCHA against automated script logins.
  • Test whether SSL is used as a security measure. If it is, a proper message should be displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
  • All transactions, error messages and security breach attempts should be logged in log files somewhere on the web server.
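A hedged sketch of the first two checks above (the URLs and the site_id parameter are placeholders, and the assertions assume the server answers unauthorized access with a redirect or a 401/403 status):

import requests

# Placeholder internal URL of the application under test.
INTERNAL_PAGE = "https://example.com/admin/reports?site_id=123"

# 1) Direct access without logging in: the server should redirect to login or return 401/403.
anonymous = requests.get(INTERNAL_PAGE, allow_redirects=False, timeout=10)
assert anonymous.status_code in (301, 302, 401, 403), "internal page opened without login!"

# 2) Parameter tampering while logged in: change site_id to one the user does not own.
session = requests.Session()
# session.post("https://example.com/login", data={"user": "...", "password": "..."})  # placeholder login step
tampered = session.get("https://example.com/admin/reports?site_id=999", timeout=10)
assert tampered.status_code in (401, 403), "user can view another site's statistics!"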
 
 
 
 
 
14) What is Black Box and White Box Testing?
 
Black box testing takes an external perspective of the test object to derive test cases. These tests are functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the internal structure.
 
This method of test design is applicable to all levels of software testing: unit, integration, functional testing, system and acceptance. 
 
 
White box testing (clear box testing, glass box testing, transparent box testing) uses an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software. The tester chooses test case inputs to exercise paths through the code and determines the appropriate outputs
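A small illustration (the classify function is invented): white-box test cases are chosen so that every branch, and hence every path, through the code is exercised.

# Hypothetical function with two branches.
def classify(amount):
    if amount >= 1000:
        return "large"
    else:
        return "small"

# White-box tests: one input per path through the code.
def test_large_path():
    assert classify(1500) == "large"   # exercises the 'if' branch

def test_small_path():
    assert classify(200) == "small"    # exercises the 'else' branch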
 
Black Box Testing vs White Box Testing: black-box testing doesn't explicitly use knowledge of the internal structure; it usually focuses on testing functional requirements. The test designer selects valid and invalid inputs and determines the correct output. Synonyms for black-box include: behavioral, functional, opaque-box.
White-box test design allows one to peek inside the code; it uses the internal structure to design the test data and requires programming skill to identify all the paths. Synonyms for white-box include: structural, glass-box and clear-box.




Unit Testing: Unit testing is a software verification and validation method in which a programmer tests if individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual function or procedure.
 
Integration testing: Integration testing is an activity of software testing in which individual software modules are combined and tested as a group. It occurs after unit testing and before system testing. 
 
System Testing: After completion of integration testing, the development team releases the software build to a separate testing team. This separate testing team validates the software build against the customer requirements. At this level of testing, the testing team uses black-box testing techniques. 
These techniques are classified into three categories:
 
·         Usability / Accessibility Testing
·         Functional Testing
·         Non-Functional Testing
 
System Testing VS Integration Testing: System testing covers the end-to-end functionality of the software product (the whole system's functionality is tested), whereas integration testing checks whether the integrated modules communicate properly.
 
 
 
 
 
 
SOFTWARE DEVELOPMENT PROCESS

REQUIREMENT GATHERING

ANALYSIS AND PLANNING

DESIGN

CODING

TESTING

RELEASE & MAINTENANCE
 
 
Verification (QA) and Validation (QC):
 
SQA: Monitoring and measuring the strength of the development process is called SQA.
 
SQC: The validation of the software product with respect to the customer requirements and expectations is called SQC.

REVIEW: Determining the completeness and correctness of documents by the responsible people, through walkthroughs, inspections and peer reviews, is called a REVIEW.
 
Walkthrough: Checking a document from the first line to the last line.
 
Inspection: A formal check of a document in which the team searches for defects or missing items.
 
Peer review: Comparing one document with another document, point by point and word for word.
 
PROTOTYPE: A sample model of the software. It consists of the interface (screens) without functionality.
 
Entry criteria:
1) All source code is unit tested
2) All QA resources have enough functional knowledge
3) Hardware and software are in place (a separate QA environment with its own web server, database and application server instance must be available)
4) Test plans and test cases are reviewed and signed off
5) Test data is available
6) There are no show stoppers
 
Exit criteria:
1) No defect over a period of time or testing effort
2) Planned deliverables are ready
3) High severity defects are fixed
 
 
 
TESTING TERMINOLOGY
 
Testing Strategy: It is a document that defines the testing approach to be followed by the testing team.
 
Test Plan: It is a document that provides work allocation in terms of schedule.
 
Test Case: It defines a test condition used to validate functionality in terms of completeness and correctness.
 
Good Test Case: One that has a high probability of finding a defect.
 
Test Log: It records the result (passed or failed) of executing a test case against an application build.
 
Error, Defect (Issue) and Bug: (a) A mistake in the code is called an ERROR. (b) When this mistake is found by a test engineer during testing, it is called a DEFECT/ISSUE. (c) When the defect/issue is reviewed and accepted by the development team for fixing, it is called a BUG.
 
Re-Testing: It is also known as data-driven (or iterative) testing. Test engineers repeat the same test on the same application build with multiple input values. This type of test repetition is called re-testing (see the sketch below).
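A minimal sketch using pytest's parametrize decorator (the function and values are illustrative, reusing the hypothetical month validator from the equivalence-partitioning section): the same test is repeated with multiple input values, which is the data-driven repetition described above.

import pytest

# Hypothetical function under test.
def is_valid_month(month):
    return 1 <= month <= 12

@pytest.mark.parametrize("month, expected", [
    (1, True), (6, True), (12, True),      # valid inputs
    (0, False), (13, False), (-5, False),  # invalid inputs
])
def test_is_valid_month(month, expected):
    assert is_valid_month(month) == expected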
 
Regression: The re-execution of selected test cases on a modified build, to ensure that a bug fix works without any side effects, is called regression testing.
 
 
Test Strategy VS Test Plan

TEST STRATEGY is a company-level document developed by quality analysts. This document defines the testing approach to be followed by the testing team. It consists of:

1) Scope and objective: The purpose of testing in an organization
2) Business issues: Budget control for testing
3) Testing approach: That is the TRM (Test responsibility Matrix)
4) Test Deliverables: Names of the testing documents to be prepared by the Testing Team.
5) Roles and Responsibilities: Names of jobs in the testing team and their responsibilities
6) Communication and status reporting: Required negotiation between two roles in a team
7) Test automation and Tools: Availability of testing tools and purpose of automation
8) Defect reporting and Tracking: required negotiation between testing and development teams
9) Risks and Mitigations: Expected failures during testing and solutions to overcome
10) Change and Configuration management: How to handle sudden changes in customer requirements.
11) Training plan:

Test Plan: A Test Plan, on the other hand, is a document which basically defines:
1) What to test?
2) How to test?
3) When to test?
4) Who will test?
Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning

The test lead prepares this document. It is normally defined in the IEEE 829 format, which is as follows:

1) Test plan ID : Unique number or name
2) Introduction : About project
3) Test Items: Names of all modules in the project.
4) Features to be tested:
5) Features not to be tested:
6) Testing approach: finalized TRM, selected testing techniques
7) Entry Criteria : When testing can be started
8) Exit Criteria : when testing can be stopped
9) Fail or Pass criteria:
10)Test Environment: The environment where the test is to be carried out.
11)Test Deliverables:
12)Staff and training
13)Roles and Responsibilities
14)Project starting and End dates
15)Risks and mitigations.

Legacy System: An old or outdated system which is still in use.

Agile Testing: Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing.
 
 
Type of Testing:
 
  • Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
  • White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
  • Unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
  • Integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
  • Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
  • System testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
  • End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
  • Sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
  • Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing approaches can be especially useful for this type of testing.
  • Acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
  • Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
  • Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
  • Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
  • Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
  • Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
  • Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
  • Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
  • Ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
  • User acceptance testing - determining if software is satisfactory to an end-user or customer.
  • Comparison testing - comparing software weaknesses and strengths to competing products.
  • Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
  • Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
  • Mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
  • Agile testing does not emphasize rigidly defined testing procedures, but rather focuses on testing iteratively against newly developed code until quality is achieved from an end customer's perspective. Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing.
 
 
