Test your basic knowledge | ISTQB
Subjects: certifications, istqb, it-skills
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. You may sometimes find the answers obvious, but seeing them again reinforces your understanding each time you take the test.
1. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
N-switch coverage
decision coverage
instrumentation
test design tool
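
As a study aid for question 1, here is a minimal sketch of how 1-switch (N = 1) coverage could be computed; the toy state machine and the executed test runs are invented for illustration.

from itertools import product

# State machine modelled as transitions (from_state, event, to_state).
transitions = [
    ("idle", "start", "running"),
    ("running", "pause", "paused"),
    ("paused", "resume", "running"),
    ("running", "stop", "idle"),
]

# All valid sequences of N+1 = 2 consecutive transitions.
all_pairs = {(t1, t2) for t1, t2 in product(transitions, repeat=2) if t1[2] == t2[0]}

# Transition sequences actually exercised by a (hypothetical) test suite.
executed_runs = [
    [("idle", "start", "running"), ("running", "stop", "idle")],
    [("idle", "start", "running"), ("running", "pause", "paused")],
]
exercised = {(run[i], run[i + 1]) for run in executed_runs for i in range(len(run) - 1)}

print(f"1-switch coverage: {100 * len(exercised & all_pairs) / len(all_pairs):.0f}%")
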
2. A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product - a subset of the final product under development - which grows from iteration to iteration to become the final product.
scribe
iterative development model
test log
failure
3. A set of several test cases for a component or system under test - where the post condition of one test is often used as the precondition for the next one.
test suite
actual outcome
basic block
benchmark test
4. Analysis of source code carried out without execution of that software.
cause-effect graph
static analysis tool
static code analysis
path testing
5. A document identifying test items - their configuration - current status and other delivery information delivered by development to testing - and possibly other stakeholders - at the start of a test execution phase. [After IEEE 829]
severity
release note
suspension criteria
iterative development model
6. The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).
test execution schedule
thread testing
module
risk analysis
7. Comparison of actual and expected results - performed while the software is being executed - for example by a test execution tool.
dynamic comparison
non-functional requirement
cyclomatic complexity
boundary value analysis
8. An environment containing hardware - instrumentation - simulators - software tools - and other support elements needed to conduct a test. [After IEEE 610]
test environment
equivalence partition
multiple condition testing
inspection
9. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal - and estimating the number of remaining defects. [IEEE 610]
fault seeding
classification tree method
defect report
review tool
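
Question 9 lends itself to a small worked estimate. A hedged sketch, assuming seeded and native defects are found at roughly the same rate (the numbers are made up):

# Capture/recapture-style estimate that fault seeding enables.
seeded_total = 20    # defects intentionally added
seeded_found = 15    # seeded defects detected by the test suite
native_found = 90    # non-seeded defects detected so far

estimated_native_total = native_found * seeded_total / seeded_found
estimated_remaining = estimated_native_total - native_found
print(f"estimated native defects: {estimated_native_total:.0f}, remaining: {estimated_remaining:.0f}")
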
10. Choosing a set of input values to force the execution of a given path.
pass/fail criteria
iterative development model
software
path sensitizing
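
A minimal sketch of path sensitizing (question 10), with an invented function under test: the input values are chosen specifically to force each path.

def classify(x: int, y: int) -> str:
    if x > 10:          # decision 1
        if y < 5:       # decision 2
            return "A"  # path: True, True
        return "B"      # path: True, False
    return "C"          # path: False

# x = 11, y = 3 sensitizes the True/True path; x = 11, y = 7 the True/False path.
assert classify(11, 3) == "A"
assert classify(11, 7) == "B"
assert classify(5, 0) == "C"
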
11. Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives - quality planning - quality control - quality assurance and quality improvement. [ISO 9000]
decision coverage
performance indicator
quality management
informal review
12. Non fulfillment of a specified requirement. [ISO 9000]
unreachable code
output domain
resource utilization
non-conformity
13. Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. See also black box test design technique.
functional test design technique
test execution phase
use case testing
blocked test case
14. The capability of the software product to enable the user to operate and control it. [ISO 9126] See also usability. The capability of the software to be understood - learned - used and attractive to the user when used under specified conditions. [ISO 9126]
stress testing tool
code-based testing
operability
statement
15. The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
test level
resumption criteria
accuracy
reviewer
16. An abstract representation of the sequence and possible changes of the state of data objects - where the state of an object is any of: creation - usage - or destruction. [Beizer]
fault seeding
precondition
data flow
dynamic analysis tool
17. A minimal software item that can be tested in isolation.
test data preparation tool
infeasible path
instrumenter
module
18. Coverage measures based on the internal structure of a component or system.
structural coverage
N-switch testing
Capability Maturity Model Integration (CMMI)
orthogonal array
19. A pointer within a web page that leads to other web pages.
multiple condition coverage
hyperlink
regression testing
test oracle
20. An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
installation wizard
functional integration
multiple condition testing
staged representation
21. A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing. A systematic way of testing all-pair combinations of variables using orthogonal arrays - which significantly reduces the number of combinations needed to test all pairs.
maintainability
pairwise testing
pointer
incremental development model
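
For question 21, a small sketch of the idea behind pairwise testing: four hand-picked test cases (instead of all 2 x 2 x 2 = 8 combinations) still cover every pair of parameter values. The parameters and values are illustrative only.

from itertools import combinations, product

params = {
    "browser": ["chrome", "firefox"],
    "os": ["linux", "windows"],
    "locale": ["en", "de"],
}

tests = [
    {"browser": "chrome", "os": "linux", "locale": "en"},
    {"browser": "chrome", "os": "windows", "locale": "de"},
    {"browser": "firefox", "os": "linux", "locale": "de"},
    {"browser": "firefox", "os": "windows", "locale": "en"},
]

# Every pair of parameters, and every pair of their values, must appear in some test.
for p1, p2 in combinations(params, 2):
    for v1, v2 in product(params[p1], params[p2]):
        assert any(t[p1] == v1 and t[p2] == v2 for t in tests), (p1, v1, p2, v2)
print("all pairs covered by", len(tests), "tests")
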
22. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge - for example the minimum or maximum value of a range.
functionality testing
pointer
boundary value
feature
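
Boundary values (question 22) in miniature, assuming an input field that accepts ages 18 to 65: the edges themselves plus the nearest value just outside each edge.

valid_ages = range(18, 66)    # 18..65 inclusive

for age in (17, 18, 65, 66):  # just outside / on each boundary
    print(age, "valid" if age in valid_ages else "invalid")
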
23. A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.
test condition
statement testing
decision condition coverage
decision
24. A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.
static code analysis
vertical traceability
syntax testing
root cause
25. A procedure to derive and/or select test cases targeted at one or more defect categories - with tests being developed from what is known about the specific defect category. See also defect taxonomy. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.
control flow
variable
defect based test design technique
error tolerance
26. A black box test design technique in which test cases are designed to execute business procedures and processes. [TMap] See also procedure testing. Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users' business procedures or operational procedures.
classification tree
process cycle test
CASE
testable requirements
27. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]
volume testing
performance indicator
performance testing tool
functionality
28. An instance of an output. See also output. A variable (whether stored within a component or outside) that is written by a component.
compatibility testing
testware
output value
boundary value
29. A sequence of events - e.g. executable statements - of a component or system from an entry point to an exit point.
path
database integrity testing
test comparison
design-based testing
30. The process of evaluating behavior - e.g. memory performance - CPU usage - of a system or component during execution. [After IEEE 610]
maintenance
expected result
hyperlink
dynamic analysis
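
Dynamic analysis (question 30) can be tried out with Python's built-in tracemalloc; the workload below is arbitrary and the reported numbers will vary by machine.

import tracemalloc

tracemalloc.start()
data = [list(range(1000)) for _ in range(100)]   # workload under observation
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current: {current / 1024:.1f} KiB, peak: {peak / 1024:.1f} KiB")
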
31. A variable (whether stored within a component or outside) that is written by a component.
portability testing
output
operational profile
risk
32. A variable (whether stored within a component or outside) that is read by a component.
test case suite
test
input
isolation testing
33. A factor that could result in future negative consequences; usually expressed as impact and likelihood.
output value
test closure
recovery testing
risk
34. An extension of FMEA - as in addition to the basic FMEA - it includes a criticality analysis - which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences - allowing remedial effort to be directed where it will produce the greatest value.
root cause analysis
Failure Mode - Effect and Criticality Analysis (FMECA)
white-box test design technique
test suite
35. The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
coverage item
functional testing
static analyzer
availability
36. A document summarizing testing activities and results - produced at regular intervals - to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.
data flow
white-box testing
test progress report
performance testing tool
37. A formula based test estimation method based on function point analysis. [TMap]
Test Point Analysis (TPA)
resumption criteria
operational testing
reviewer
38. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects) - which can be used to design test cases.
LCSAJ testing
isolation testing
entry point
decision table
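
A tiny decision table for question 38, using a made-up discount rule; each row of conditions and expected effect maps directly onto a test case.

decision_table = [
    # (is_member, order_over_100) -> expected_discount
    ((True, True), 0.15),
    ((True, False), 0.10),
    ((False, True), 0.05),
    ((False, False), 0.00),
]

def discount(is_member: bool, order_over_100: bool) -> float:
    # Hypothetical implementation under test.
    if is_member:
        return 0.15 if order_over_100 else 0.10
    return 0.05 if order_over_100 else 0.00

for conditions, expected in decision_table:
    assert discount(*conditions) == expected
print("all decision-table cases pass")
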
39. A tool that provides support for the identification and control of configuration items - their status over changes and versions - and the release of baselines consisting of configuration items.
component integration testing
Test Maturity Model (TMM)
alpha testing
configuration management tool
40. The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability - robustness. The ability of the software product to perform its required functions under stated conditions for a specified period of time - or for a specified number of operations. [ISO 9126]
risk level
fault tolerance
actual result
control flow graph
41. A software tool used to carry out instrumentation.
scribe
component
test harness
instrumenter
42. A programming language in which executable test scripts are written - used by a test execution tool (e.g. a capture/playback tool).
interoperability testing
scripting language
measurement scale
user test
43. A scripting technique that uses data files to contain not only test data and expected results - but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
domain
requirements phase
decision table
keyword driven testing
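
A minimal keyword-driven sketch for question 43: the test steps live in data as (keyword, arguments) rows, and a control script dispatches each keyword to a supporting function. The keywords and the system under test are invented.

def open_account(name):
    return {"name": name, "balance": 0}

def deposit(account, amount):
    account["balance"] += amount

def check_balance(account, expected):
    assert account["balance"] == expected, account

keyword_table = [
    ("open_account", ("alice",)),
    ("deposit", (100,)),
    ("deposit", (50,)),
    ("check_balance", (150,)),
]

# Control script: interpret each keyword row against the supporting functions.
account = None
for keyword, args in keyword_table:
    if keyword == "open_account":
        account = open_account(*args)
    elif keyword == "deposit":
        deposit(account, *args)
    elif keyword == "check_balance":
        check_balance(account, *args)
print("keyword-driven run passed")
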
44. The process of developing and prioritizing test procedures - creating test data and - optionally - preparing test harnesses and writing automated test scripts.
operational acceptance testing
condition determination coverage
test implementation
stub
45. A review characterized by documented procedures and requirements - e.g. inspection.
path
compliance testing
formal review
Test Point Analysis (TPA)
46. A scripting technique that stores test input and expected results in a table or spreadsheet - so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham]
daily build
data driven testing
test control
non-conformity
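
Data-driven testing (question 46) in miniature: one control script reads a table of inputs and expected results and applies the same test logic to each row. The function under test and the rows are illustrative only.

def to_celsius(fahrenheit: float) -> float:
    return (fahrenheit - 32) * 5 / 9

test_table = [
    # (input_fahrenheit, expected_celsius)
    (32, 0.0),
    (212, 100.0),
    (-40, -40.0),
]

for fahrenheit, expected in test_table:
    actual = to_celsius(fahrenheit)
    assert abs(actual - expected) < 1e-9, (fahrenheit, actual, expected)
print(f"{len(test_table)} data-driven cases passed")
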
47. A set of several test cases for a component or system under test - where the post condition of one test is often used as the precondition for the next one.
white-box testing
boundary value coverage
test case suite
operational profile testing
48. The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]
reliability testing
replaceability
measurement
integration
49. Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique - e.g. testing with invalid input values or exceptions. [After Beizer]
precondition
non-conformity
negative testing
capture/replay tool
50. The capability of the software product to use appropriate amounts and types of resources - for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files - when the software performs its function under stated conditions. [After ISO 9126]
N-switch testing
resource utilization
software life cycle
attack