Software Testing
Glossary
1 acceptance testing: Formal testing conducted to enable a user, customer, or other authorized entity to determine whether to accept a system or component. [IEEE]
2 actual outcome: The
behavior actually produced when the object is tested
under specified conditions.
3 ad hoc testing: Testing carried out using no
recognised test case design technique.
4 alpha testing: Simulated or actual
operational testing at an in-house site not otherwise
involved with the software developers.
5 arc testing: See branch testing.
6 Backus-Naur form: A metalanguage used
to formally describe the syntax of a language. See BS 6154.
7 basic block: A sequence of one or
more consecutive, executable statements containing no
branches.
8 basis test set: A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved.
9 bebugging: See error seeding. [Abbott]
10 behavior: The combination of input values and preconditions and the required response for a function of a system. The full specification of a function would normally comprise one or more behaviours.
11 beta testing: Operational testing at a site not otherwise
involved with the software developers.
12 big-bang testing: Integration testing where no incremental testing takes place prior to all the
system's components being combined to form the system.
13 black box testing: See functional test case design.
14 bottom-up testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components.
The process is repeated until the component at the top
of the hierarchy is tested.
15 boundary value: An input value or output value which is on the boundary between equivalence classes, or an incremental distance
either side of the boundary.
16 boundary value analysis: A test case design technique for a component
in which test cases are designed which include
representatives of boundary values.
17 boundary value coverage: The percentage of boundary values of the component's equivalence classes which have been exercised by a test case suite.
18 boundary value testing: See boundary value analysis.
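As an illustration of entries 15-17, the sketch below applies boundary value analysis to a hypothetical component that accepts ages from 18 to 65 inclusive; the function name and limits are assumptions made for the example only.

    # Hypothetical component: accepts ages 18..65 inclusive.
    def accept_age(age):
        return 18 <= age <= 65

    # Boundary value analysis: the boundaries of the valid equivalence class
    # plus values an incremental distance (here 1) either side of each boundary.
    boundary_cases = [
        (17, False),  # just below lower boundary
        (18, True),   # lower boundary
        (19, True),   # just above lower boundary
        (64, True),   # just below upper boundary
        (65, True),   # upper boundary
        (66, False),  # just above upper boundary
    ]

    for value, expected in boundary_cases:
        assert accept_age(value) == expected, value
    print("all boundary value tests passed")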
19 branch: A conditional transfer of control from any statement to any other statement in a component, or an unconditional transfer of control from any statement to any other statement in the component except the next statement, or when a component has more than one entry point, a transfer of control to an entry point of the component.
20 branch condition: See decision condition.
21 branch condition combination coverage: The percentage of combinations of all branch condition outcomes in every decision that have been exercised by a test case suite.
22 branch condition combination testing: A test case design technique in which test
cases are designed to execute combinations of branch condition outcomes.
23 branch condition coverage: The percentage of
branch condition outcomes in every decision that have
been exercised by a test case suite.
24 branch condition testing: A test case design technique in which test cases are designed to execute branch condition outcomes.
25 branch coverage: The percentage of branches that have been exercised by a test case suite.
26 branch outcome: See decision
outcome.
27 branch point: See decision.
28 branch testing: A test case design technique for a component
in which test cases are designed to execute branch outcomes.
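A minimal sketch of branch testing (entries 25 and 28), using a hypothetical grade function invented for the example; the two test cases exercise both outcomes of the single decision and so achieve 100% branch coverage.

    def grade(score):
        # One decision, therefore two branches (the TRUE and FALSE outcomes).
        if score >= 50:
            return "pass"
        return "fail"

    # Branch testing: one test case per branch outcome.
    assert grade(75) == "pass"   # exercises the TRUE branch
    assert grade(30) == "fail"   # exercises the FALSE branch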
29 bug: See fault.
30 bug seeding: See error seeding.
31 C-use:
See computation data use.
32 capture/playback tool: A test tool that records test input as
it is sent to the software under test. The input cases stored can then be used
to reproduce the test at a later time.
33 capture/replay tool: See capture/playback tool.
34 CAST: Acronym for
computer-aided software testing.
35 cause-effect graph: A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
36 cause-effect graphing: A test case design technique in which test cases are designed by consideration of cause-effect graphs.
37 certification: The process of
confirming that a system or component complies with
its specified requirements and is acceptable for operational use. From [IEEE].
38 Chow's coverage metrics: See N-switch coverage. [Chow]
39 code coverage: An analysis method
that determines which parts of the software have been executed (covered) by the
test case suite and which parts have not been
executed and therefore may require additional attention.
40 code-based testing: Designing tests based on objectives derived from the implementation (e.g., tests that execute specific control flow paths or use specific data items).
41 compatibility testing: Testing whether the system is compatible with other systems with which it should communicate.
42 complete path testing: See exhaustive testing.
44 component testing: The testing of
individual software components. After [IEEE].
46 condition: A Boolean expression containing no Boolean operators. For instance, A<B is a condition but A and B is not. [DO-178B]
47 condition coverage: See branch condition coverage.
48 condition outcome: The evaluation of a condition to TRUE or FALSE.
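To make entries 46-48 concrete, the hypothetical decision below contains two conditions; each condition evaluates to TRUE or FALSE (its condition outcome) independently of the decision outcome. The function and its inputs are invented for the example.

    def can_borrow(age, has_card):
        # The decision combines two conditions with a Boolean operator.
        # "age >= 18" and "has_card" are conditions; "age >= 18 and has_card" is not.
        if age >= 18 and has_card:
            return True
        return False

    # Condition outcomes for one input: "age >= 18" is TRUE, "has_card" is FALSE,
    # so the decision outcome is FALSE.
    assert can_borrow(30, False) is False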
49 conformance criterion: Some method of judging whether or not the component's action on a particular specified input value conforms to the specification.
50 conformance testing: The process of testing that an implementation conforms to the specification on which it is based.
51 control flow: An abstract representation of all possible
sequences of events in a program's execution.
52 control flow graph: The diagrammatic representation of the possible alternative control flow paths through a component.
53 control flow path: See path.
54 conversion testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
55 correctness: The degree to which software conforms to its specification.
56 coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite.
57 coverage item: An entity or property used as a basis for testing.
58 data definition: An executable statement where a variable is assigned a value.
59 data definition C-use coverage: The percentage of data definition C-use pairs in a component that are exercised by a test case suite.
60 data definition C-use pair: A data definition and computation data use, where the data use uses the value defined in the data definition.
61 data definition P-use coverage: The percentage of data definition P-use pairs in a component that are exercised by a test case suite.
62 data definition P-use pair: A data definition and predicate data use, where the data use uses the value defined in the data definition.
63 data definition-use coverage: The percentage of data definition-use pairs in a component that are exercised by a test case suite.
64 data definition-use pair: A data definition and data use, where the data use uses the value defined in the data definition.
65 data definition-use testing: A test case design technique for a component in which test cases are designed to execute data definition-use pairs.
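The sketch below, written purely for illustration, marks the data definitions and data uses of the variable total in a small invented function (entries 58-65).

    def checkout(prices, voucher):
        total = sum(prices)            # data definition of total
        if total > 100:                # predicate data use (P-use) of total
            total = total - voucher    # C-use of total on the right-hand side, and a new definition
        return total                   # computation data use (C-use) of total

    # A test case suite for data definition-use testing would include inputs that
    # exercise both the definition/P-use pair and the definition/C-use pairs, e.g.:
    assert checkout([60, 70], 10) == 120   # total > 100: the redefinition is executed
    assert checkout([20, 30], 10) == 50    # total <= 100: the original definition reaches the return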
66 data flow coverage: Test coverage measure based on variable usage within the code. Examples are data definition-use coverage, data definition P-use coverage, data definition C-use coverage, etc.
67 data flow testing: Testing
in which test cases are designed based on variable
usage within the code.
68 data use: An executable statement where the value of a variable is accessed.
69 debugging: The process of
finding and removing the causes of failures in software.
70 decision: A program point at which the control flow has two or more alternative routes.
72 decision coverage: The percentage of decision outcomes that have been exercised by a test case suite.
73 decision outcome: The result of a decision
(which therefore determines the control flow alternative
taken).
74 design-based testing: Designing tests based on objectives derived from the architectural or detail design of the software (e.g., tests that execute specific invocation paths or probe the worst case behavior of algorithms).
75 desk checking: The testing of software by the manual simulation
of its execution.
76 dirty testing: See negative testing. [Beizer]
77 documentation testing: Testing concerned with the accuracy of documentation.
78 domain: The set from which
values are selected.
79 domain testing: See equivalence partition testing.
80 dynamic analysis: The process of evaluating a system or component based upon its behavior during execution. [IEEE]
81 emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE, DO-178B]
82 entry point: The first executable
statement within a component.
83 equivalence class: A portion of the component's
input or output domains for
which the component's behaviour is assumed to be the
same from the component's specification.
84 equivalence partition: See equivalence class.
85 equivalence partition coverage: The percentage of equivalence classes generated for the component, which have been exercised by a test case suite.
86 equivalence partition testing: A test case design technique for a component
in which test cases are designed to execute
representatives from equivalence classes.
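A sketch of equivalence partition testing (entries 83-86) for the same hypothetical age check used in the boundary value example above: one representative value is chosen from each equivalence class.

    def accept_age(age):
        return 18 <= age <= 65

    # Three equivalence classes can be derived from the specification:
    # below the valid range, within it, and above it.
    representatives = [
        (10, False),   # representative of the "too young" class
        (40, True),    # representative of the valid class
        (80, False),   # representative of the "too old" class
    ]
    for value, expected in representatives:
        assert accept_age(value) == expected, value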
88 error guessing: A test case design technique where the experience of the
tester is used to postulate what faults might occur, and
to design tests specifically to expose them.
89 error seeding: The process of intentionally adding known faults to those already in a computer program for the purpose
of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. [IEEE]
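Entry 89 implies a simple estimate: if the proportion of seeded faults found is assumed to match the proportion of real faults found, the remaining fault count can be approximated as below. The numbers are invented for illustration only.

    seeded = 10          # known faults deliberately inserted
    seeded_found = 8     # seeded faults detected during testing
    real_found = 40      # genuine (indigenous) faults detected

    # Assume detection is equally effective for seeded and real faults.
    estimated_real_total = real_found * seeded / seeded_found   # 50.0
    estimated_remaining = estimated_real_total - real_found     # 10.0
    print(estimated_real_total, estimated_remaining)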
90 executable statement: A statement
which, when compiled, is translated into object code, which will be executed
procedurally when the program is running and may perform an action on program
data.
91 exercised: A program element is exercised by a test case when the input value causes the execution of that element, such as a statement, branch, or other structural element.
92 exhaustive testing: A test case
design technique in which the test case suite comprises
all combinations of input values and preconditions for component
variables.
93 exit point: The last executable statement within a component.
94 expected outcome: See predicted outcome.
95 facility testing: See functional test case design.
97 fault: A manifestation of an error in software. A fault, if encountered, may cause a failure. [DO-178B]
98 feasible path: A path for which
there exists a set of input values and execution
conditions which causes it to be executed.
99 feature testing: See functional test case design.
100 functional specification: The document that describes in detail the characteristics of the product with regard to its intended capability. [BS 4778, Part 2]
101 functional test case design: Test
case selection that is based on an analysis of the specification
of the component without reference to its internal
workings.
102 glass box testing: See structural test case design.
103 incremental testing: Integration testing where system components are integrated into the system one at a time
until the entire system is integrated.
104 independence: Separation of responsibilities which ensures the accomplishment of objective evaluation. After [DO-178B].
105 infeasible path: A path
which cannot be exercised by any set of possible input values.
106 input: A variable (whether stored within a component or outside it) that is read by the component.
107 input domain: The set of all
possible inputs.
108 input value: An instance of an input.
109 inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). After [Graham].
110 installability testing: Testing concerned with the installation procedures for the system.
111 instrumentation: The insertion of additional code into
the program in order to collect information about program behaviour
during program execution.
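A toy illustration of instrumentation (entries 111-112): extra statements are inserted purely to record which branches executed. The counter dictionary and function names are assumptions of the example, not part of any particular tool.

    hits = {"true_branch": 0, "false_branch": 0}   # inserted bookkeeping, not program logic

    def grade_instrumented(score):
        if score >= 50:
            hits["true_branch"] += 1    # inserted probe
            return "pass"
        hits["false_branch"] += 1       # inserted probe
        return "fail"

    grade_instrumented(80)
    grade_instrumented(20)
    print(hits)   # {'true_branch': 1, 'false_branch': 1} -> both branches exercised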
112 instrumenter: A software tool used to carry out instrumentation.
113 integration: The process of combining components
into larger assemblies.
114 integration testing: Testing
performed to expose faults in the interfaces and in the
interaction between integrated components.
115 interface testing: Integration testing where the interfaces between system components are tested.
116 isolation testing: Component testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs.
117 LCSAJ: A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
118 LCSAJ coverage: The percentage of LCSAJs of a component which are exercised by a test case suite.
119 LCSAJ testing: A test case design technique for a component
in which test cases are designed to execute LCSAJs.
120 logic-coverage testing: See structural test case design. [Myers]
121 logic-driven testing: See structural test case design.
122 maintainability testing: Testing
whether the system meets its specified objectives for maintainability.
123 modified condition/decision coverage: The percentage of all branch condition outcomes that independently affect a decision outcome that have been exercised by a test case suite.
124 modified condition/decision testing: A test case design technique in which test cases are designed to execute branch condition outcomes that independently affect a decision outcome.
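A sketch of modified condition/decision testing for the two-condition decision "A and B" (an invented example): three test cases are enough to show that each condition independently affects the decision outcome.

    def decision(a, b):
        return a and b

    # Holding B true while toggling A changes the outcome, so A independently
    # affects the decision; likewise for B with A held true.
    mcdc_cases = [
        (True,  True,  True),    # baseline
        (False, True,  False),   # only A toggled -> outcome changes (shows the effect of A)
        (True,  False, False),   # only B toggled -> outcome changes (shows the effect of B)
    ]
    for a, b, expected in mcdc_cases:
        assert decision(a, b) == expected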
125 multiple condition coverage: See branch condition combination coverage.
126 mutation analysis: A method to determine test case suite thoroughness by measuring the extent to which a test case suite can discriminate the program from slight variants (mutants) of the program. See also error seeding.
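To illustrate mutation analysis (entry 126), the mutant below differs from the original by a single relational operator; a thorough test case suite should produce a different outcome for at least one input and thereby "kill" the mutant. Both functions are invented for the example.

    def original(x):
        return "positive" if x > 0 else "non-positive"

    def mutant(x):
        return "positive" if x >= 0 else "non-positive"   # ">" mutated to ">="

    # A suite containing the boundary input 0 discriminates the program from the mutant.
    suite = [5, 0, -3]
    killed = any(original(x) != mutant(x) for x in suite)
    assert killed   # the mutant is detected because original(0) != mutant(0)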
127 N-switch coverage: The percentage of sequences of N-transitions that have been exercised by a test case suite.
128 N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N-transitions.
130 negative testing: Testing aimed at showing software does not work. [Beizer]
131 non-functional requirements testing: Testing of those requirements that do not relate to functionality, e.g. performance, usability, etc.
132 operational testing: Testing conducted to evaluate a system or component in its operational environment. [IEEE]
133 oracle: A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test. After [Adrion].
134 outcome:
Actual outcome or predicted
outcome. This is the outcome of a test. See also branch
outcome, condition outcome and decision outcome.
135 output: A variable (whether stored within a component or outside it) that is written to by the component.
136 output domain: The set of all possible outputs.
137 output value: An instance of an output.
138 P-use: See predicate data use.
139 partition testing: See equivalence partition testing. [Beizer]
140 path: A sequence of executable statements of a component, from an entry point to an exit point.
141 path coverage: The percentage of paths in a component exercised by a test case suite.
142 path sensitizing: Choosing a set of input values to force the execution of a component to take a given path.
143 path testing: A test case design technique in which test cases are designed to execute paths
of a component.
144 performance testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. [IEEE]
145 portability testing: Testing aimed at demonstrating the software can be ported to specified hardware or software platforms.
146 precondition: Environmental and state conditions which must
be fulfilled before the component can be executed with
a particular input value.
147 predicate: A logical expression which evaluates to TRUE
or FALSE, normally to direct the execution path in code.
149 predicted outcome: The behaviour predicted by the specification of an object under specified conditions.
150 program instrumenter: See instrumenter.
151 progressive testing: Testing of new features after regression testing of previous features. [Beizer]
152 pseudo-random: A series which
appears to be random but is in fact generated according to some prearranged
sequence.
153 recovery testing: Testing
aimed at verifying the system's ability to recover from varying degrees of failure.
154 regression testing: Retesting of a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
155 requirements-based testing: Designing tests based on objectives derived from requirements for the software component (e.g., tests that exercise specific functions or probe the non-functional constraints such as performance or security). See functional test case design.
156 result: See outcome.
157 review: A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. [IEEE]
158 security testing: Testing
whether the system meets its specified security objectives.
159 serviceability testing: See maintainability testing.
160 simple subpath: A subpath of the control flow graph in which no program part is
executed more than necessary.
161 simulation: The representation of selected behavioural
characteristics of one physical or abstract system by another system. [ISO
2382/1].
162 simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs. [IEEE, DO-178B]
163 source statement: See statement.
164 specification: A description of a component's function in terms of its output values for specified input values under specified preconditions.
165 specified input: An input for
which the specification predicts an outcome.
166 state transition: A transition between two allowable states of a system or component.
167 state transition testing: A test case design technique in which test cases are designed to execute state transitions.
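A minimal sketch of state transition testing (entries 166-167) for an invented two-state door model: each test case drives one allowable transition and checks the resulting state.

    TRANSITIONS = {
        ("closed", "open_cmd"): "open",
        ("open", "close_cmd"): "closed",
    }

    def next_state(state, event):
        return TRANSITIONS.get((state, event), state)   # unknown events leave the state unchanged

    # One test case per allowable state transition.
    assert next_state("closed", "open_cmd") == "open"
    assert next_state("open", "close_cmd") == "closed"
    # A negative case: an event that is not allowed in the current state.
    assert next_state("closed", "close_cmd") == "closed"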
168 statement: An entity in a
programming language which is typically the smallest indivisible unit of
execution.
169 statement coverage: The percentage of executable statements in a component that have been exercised by a test case suite.
170 statement testing: A test case design technique for a component in which test cases are designed to execute statements.
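The sketch below shows why 100% statement coverage (entry 169) can be weaker than 100% branch coverage: a single test case executes every statement of this invented function, yet the FALSE outcome of the decision is never exercised.

    def apply_discount(price, is_member):
        if is_member:
            price = price - 10   # the only statement inside the decision
        return price

    # This one test case executes every statement (100% statement coverage)...
    assert apply_discount(100, True) == 90
    # ...but the FALSE branch of the decision has not been exercised; branch
    # coverage additionally requires a case such as:
    assert apply_discount(100, False) == 100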
172 static analyzer: A tool that carries
out static analysis.
173 static testing: Testing
of an object without execution on a computer.
174 statistical testing: A test case design technique in which a model of the statistical distribution of the input is used to construct representative test cases.
175 storage testing: Testing
whether the system meets its specified storage objectives.
176 stress testing: Testing
conducted to evaluate a system or component at or beyond the limits of its
specified requirements. [IEEE]
177 structural coverage: Coverage measures based on the internal structure of the component.
178 structural test case design: Test
case selection that is based on an analysis of the internal structure of
the component.
179 structural testing: See structural test case design.
180 structured basis testing: A test case design technique in which test cases are derived from the code logic to achieve 100% branch coverage.
181 structured walkthrough: See walkthrough.
182 stub: A skeletal or special-purpose implementation of a software module, used to develop or test a component that calls or is otherwise dependent on it. After [IEEE].
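A sketch of a stub used for isolation testing (entries 116 and 182): the component under test depends on a payment gateway, which is replaced here by a skeletal implementation returning canned answers. All names are invented for the example.

    def place_order(amount, gateway):
        # Component under test: depends on a lower-level payment component.
        if gateway.charge(amount):
            return "confirmed"
        return "declined"

    class StubGateway:
        """Skeletal stand-in for the real payment component."""
        def __init__(self, answer):
            self.answer = answer
        def charge(self, amount):
            return self.answer   # canned response, no real processing

    assert place_order(25, StubGateway(True)) == "confirmed"
    assert place_order(25, StubGateway(False)) == "declined"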
183 subpath: A sequence of executable statements within a component.
184 symbolic evaluation: See symbolic execution.
185 symbolic execution: A static analysis technique that derives a symbolic expression for program paths.
186 syntax testing: A test case design technique for a component
or system in which test case design is based upon the
syntax of the input.
187 system testing: The process of testing an
integrated system to verify that it meets specified requirements. [Hetzel]
188 technical requirements testing: See non-functional requirements testing.
189 test automation: The use of software
to control the execution of tests, the comparison of actual
outcomes to predicted outcomes, the setting
up of test preconditions, and other test control
and test reporting functions.
190 test case: A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. After [IEEE, DO-178B].
191 test case design technique: A method used to
derive or select test cases.
192 test case suite: A collection of one or more test cases for the software under test.
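Entries 190 and 192 can be made concrete with a small, invented suite in which each test case records its inputs, precondition and expected outcome.

    # Each test case: (description, input value, precondition, expected outcome).
    test_case_suite = [
        ("valid withdrawal", 50, {"balance": 100}, "ok"),
        ("overdraft refused", 150, {"balance": 100}, "refused"),
    ]

    def withdraw(amount, account):
        return "ok" if amount <= account["balance"] else "refused"

    for name, amount, precondition, expected in test_case_suite:
        assert withdraw(amount, dict(precondition)) == expected, name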
193 test comparator: A test tool that compares the actual outputs produced by the software under test with the
expected outputs for that test case.
194 test completion criterion: A criterion for
determining when planned testing is complete, defined in
terms of a test measurement technique.
195 test coverage: See coverage.
196 test driver: A program or test tool used to execute
software against a test case suite.
197 test environment: A description of the hardware and software environment in which the tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
198 test execution: The processing of a test
case suite by the software under test, producing an outcome.
199 test execution technique: The method used to perform the actual test execution, e.g. manual, capture/playback tool, etc.
200 test generator: A program that generates test cases in accordance with a specified strategy or heuristic. After [Beizer].
201 test harness: A testing tool that
comprises a test driver and a test
comparator.
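A minimal sketch of a test harness (entries 193, 196 and 201): a driver that feeds a test case suite to the software under test, and a comparator that checks actual outcomes against predicted outcomes. The suite format and function names are assumptions of the example.

    def run_suite(component, suite):
        """Test driver: executes the component against every test case in the suite."""
        failures = []
        for inputs, predicted in suite:
            actual = component(*inputs)
            if actual != predicted:          # test comparator
                failures.append((inputs, predicted, actual))
        return failures

    def divide(a, b):
        return a / b

    suite = [((10, 2), 5.0), ((9, 3), 3.0)]
    print(run_suite(divide, suite))   # [] -> every actual outcome matched its prediction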
202 test measurement technique: A method used to
measure test coverage items.
203 test outcome: See outcome.
204 test plan: A record of the test planning process detailing the degree of tester independence, the test environment, the test case design techniques and test measurement techniques to be used, and the rationale for their choice.
205 test procedure: A document providing detailed instructions for the execution of one or more test cases.
206 test records: For each test, an
unambiguous record of the identities and versions of the component
under test, the test specification, and actual outcome.
207 test script: Commonly used to
refer to the automated test procedure used with a test harness.
208 test specification: For each test case, the coverage item, the initial state of the software under test, the input, and the predicted outcome.
209 test target: A set of test completion criteria.
210 testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors. After [DO-178B].
211 thread testing: A variation of top-down testing where the progressive integration of components
follows the implementation of subsets of the requirements, as opposed to the integration of components by
successively lower levels.
212 top-down testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
213 unit testing: See component testing.
214 usability testing: Testing the ease with which users can learn and use a product.
215 validation: Determination of the
correctness of the products of software development
with respect to the user needs and requirements.
216 verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase. [IEEE]
217 volume testing: Testing
where the system is subjected to large volumes of data.
218 walkthrough: A review
of requirements, designs or code characterized by the author of the object
under review guiding the progression of the review.
219 white box testing: See structural test case design.
These definitions have been extracted from Version 6.2 of the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST) Glossary of Testing Terms.