Test your basic knowledge | ISTQB
Subjects: certifications, istqb, it-skills
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The three wrong answers for each question are randomly chosen from answers to other questions, so some answers may seem obvious at times, but retaking the test reinforces your understanding.
1. An abstract representation of the sequence and possible changes of the state of data objects - where the state of an object is any of: creation - usage - or destruction. [Beizer]
branch
scalability testing
operability
data flow
2. A procedure to derive and/or select test cases targeted at one or more defect categories - with tests being developed from what is known about the specific defect category. See also defect taxonomy. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.
operational profile testing
maintenance testing
input value
defect based test design technique
3. The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
component specification
random testing
high level test case
installability
4. [Beizer] A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
test target
statement testing
invalid testing
partition testing
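As a study aid (not part of the original quiz), a minimal Python sketch of the partition testing idea from question 4; the eligibility rule, its bounds, and the partition names are hypothetical examples, not ISTQB material:

```python
# Hypothetical component under test: accepts ages 18..65 inclusive.
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

# The input domain splits into three equivalence partitions:
#   invalid-low (age < 18), valid (18..65), invalid-high (age > 65).
# Partition testing covers each partition with one representative.
representatives = {"invalid_low": 10, "valid": 30, "invalid_high": 80}
expected = {"invalid_low": False, "valid": True, "invalid_high": False}

for name, age in representatives.items():
    assert is_eligible(age) == expected[name], name
```

One test per partition suffices in principle; boundary value analysis (question 42's option list) would add tests at 17, 18, 65, and 66.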
5. The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
black-box testing
adaptability
changeability
reliability growth model
6. A form of static analysis based on a representation of sequences of events (paths) in the execution through a component or system.
input value
control flow analysis
elementary comparison testing
state transition
7. The ratio of the number of failures of a given category to a given unit of measure - e.g. failures per unit of time - failures per number of transactions - failures per number of computer runs. [IEEE 610]
measure
vertical traceability
failure rate
integration testing
8. Any condition that deviates from expectation based on requirements specifications - design documents - user documents - standards - etc. or from someone's perception or experience. Anomalies may be found during - but not limited to - reviewing - test
data definition
intake test
quality
anomaly
9. A review not based on a formal (documented) procedure.
informal review
test execution
definition-use pair
test comparison
10. A form of static analysis based on the definition and usage of variables.
test data preparation tool
intake test
data flow analysis
best practice
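As a study aid (not part of the original quiz), a hypothetical Python function illustrating the definition-use pairs that data flow analysis (question 10) and data flow coverage (question 47) reason about; the function and its values are invented for illustration:

```python
# Hypothetical function with two definitions and one use of `rate`.
def discount(total: float) -> float:
    rate = 0.1           # definition d1 of `rate`
    if total > 100:
        rate = 0.2       # definition d2 of `rate`
    return total * rate  # use u1 of `rate`: du-pairs (d1,u1) and (d2,u1)

# Exercising both du-pairs needs one total on each side of 100:
low, high = discount(50), discount(150)
```

A test suite with only `discount(50)` reaches 50% data flow coverage here, since the pair (d2,u1) is never exercised.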
11. A path for which a set of input values and preconditions exists which causes it to be executed.
modelling tool
high level test case
intake test
feasible path
12. A test is deemed to pass if its actual result matches its expected result.
formal review
condition outcome
pass
scripting language
13. The set from which valid output values can be selected. See also domain. The set from which valid input and/or output values can be selected.
installation wizard
output domain
pseudo-random
configuration management tool
14. Testing performed to expose defects in the interfaces and interaction between integrated components.
defect report
output domain
component integration testing
reliability
15. The tracing of requirements through the layers of development documentation to components.
resource utilization testing
exception handling
vertical traceability
maintenance testing
16. The calculated approximation of a result (e.g. effort spent - completion date - costs involved - number of test cases - etc.) which is usable even if input data may be incomplete - uncertain - or noisy.
configuration management
root cause
test estimation
quality attribute
17. Modification of a software product after delivery to correct defects - to improve performance or other attributes - or to adapt the product to a modified environment. [IEEE 1219]
requirement
maintenance testing
component testing
maintenance
18. A document summarizing testing activities and results - produced at regular intervals - to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.
safety critical system
test progress report
requirement
path sensitizing
19. A test plan that typically addresses one test phase. See also test plan. A document describing the scope - approach - resources and schedule of intended test activities. It identifies amongst others test items - the features to be tested - the testing tasks - who will do each task - degree of tester independence - the test environment - the test design techniques and entry and exit criteria to be used - and the rationale for their choice - and any risks requiring contingency planning. [After IEEE 829]
non-functional test design techniques
maintainability
phase test plan
condition coverage
20. The evaluation of a condition to True or False.
daily build
stress testing tool
fault tolerance
condition outcome
21. An uninterrupted period of time spent in executing tests. In exploratory testing - each test session is focused on a charter - but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress.
test session
incident logging
stress testing tool
boundary value coverage
22. During the test closure phase of a test process data is collected from completed activities to consolidate experience - testware - facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process.
Function Point Analysis (FPA)
feature
expected result
test closure
23. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers - to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.
Failure Mode - Effect and Criticality Analysis (FMECA)
capture/playback tool
beta testing
formal review
24. Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software - as a result of the changes made. It is performed when the software or its environment is changed.
acceptance testing
pair testing
regression testing
installation guide
25. A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository - e.g. requirements management tool - from specified test conditions held in the tool itself - or from code.
test phase
test design tool
syntax testing
component specification
26. Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing. Testing - either functional or non-functional - without reference to the internal structure of the component or system.
functional testing
negative testing
status accounting
state transition testing
27. A development activity where a complete system is compiled and linked every day (usually overnight) - so that a consistent system is available at any time including all latest changes.
incremental development model
test procedure specification
daily build
risk management
28. Acceptance testing by users/customers at their site - to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes - normally including hardware as well as software.
functional integration
site acceptance testing
cyclomatic complexity
measure
29. The capability of the software product to provide appropriate performance - relative to the amount of resources used under stated conditions. [ISO 9126]
invalid testing
load testing
simulation
efficiency
30. A test environment comprised of stubs and drivers needed to execute a test.
test harness
test condition
test case
test specification
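As a study aid (not part of the original quiz), a minimal Python sketch of the stubs-and-drivers idea behind a test harness (question 30); the checkout function, gateway, and canned rule are hypothetical examples:

```python
# Hypothetical component under test; it depends on a payment gateway
# that is not available in the test environment.
def checkout(cart_total: float, gateway) -> str:
    return "paid" if gateway.charge(cart_total) else "declined"

# Stub: stands in for the missing gateway with canned responses.
class GatewayStub:
    def charge(self, amount: float) -> bool:
        return amount <= 100.0  # canned rule, no real processing

# Driver: invokes the component under test and checks the outcomes.
def run_harness() -> bool:
    stub = GatewayStub()
    return (checkout(50.0, stub) == "paid"
            and checkout(500.0, stub) == "declined")
```

The stub replaces a called component; the driver replaces a calling component. Together they let the component under test execute in isolation.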
31. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level te
audit trail
low level test case
off-the-shelf software
data flow
32. Testing to determine how the occurrence of two or more activities within the same interval of time - achieved either by interleaving the activities or by simultaneous execution - is handled by the component or system. [After IEEE 610]
product risk
stub
concurrency testing
incremental testing
33. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems - database management systems - and other applications.
resumption criteria
operational environment
functionality testing
deliverable
34. (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model - Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126]
maturity
vertical traceability
Capability Maturity Model Integration (CMMI)
management review
35. The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
decision condition coverage
bottom-up testing
module
Test Point Analysis (TPA)
36. The ability to identify related items in documentation and software - such as requirements with associated tests. See also horizontal traceability - vertical traceability. The tracing of requirements for a test level through the layers of test documentation.
test planning
traceability
functionality
safety critical system
37. A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects) - which can be used to design test cases.
learnability
usability testing
driver
decision table testing
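As a study aid (not part of the original quiz), a hypothetical decision table coded in Python to illustrate question 37; the membership/discount rules are invented for illustration:

```python
# Hypothetical decision table: the conditions (member?, total > 100?)
# determine an action (discount rate).
DECISION_TABLE = {
    (True,  True):  0.20,
    (True,  False): 0.10,
    (False, True):  0.05,
    (False, False): 0.00,
}

def discount_rate(is_member: bool, total: float) -> float:
    return DECISION_TABLE[(is_member, total > 100)]

# Decision table testing derives one test case per column (rule):
for (member, big_total), rate in DECISION_TABLE.items():
    total = 150.0 if big_total else 50.0
    assert discount_rate(member, total) == rate
```

Covering every rule exercises each combination of conditions once, which is the point of the technique.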
38. The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]
scalability
benchmark test
simulation
process improvement
39. A type of integration testing in which software elements - hardware elements - or both are combined all at once into a component or an overall system - rather than in stages. [After IEEE 610] See also integration testing. Testing performed to expose defects in the interfaces and interaction between integrated components.
test evaluation report
decision table testing
big-bang testing
defect report
40. The process of testing to determine the interoperability of a software product. See also functionality testing. The process of testing to determine the functionality of a software product.
bespoke software
multiple condition testing
interoperability testing
test environment
41. A white box test design technique in which test cases are designed to execute statements.
output domain
hyperlink
statement testing
test execution phase
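As a study aid (not part of the original quiz), a minimal Python sketch of statement testing and statement coverage (question 41); the traced function and statement labels are hypothetical:

```python
# Hypothetical trace of which statements a test suite has executed.
executed = set()

def absolute(x: int) -> int:
    executed.add("s1")      # s1: entry
    if x < 0:
        executed.add("s2")  # s2: negate (runs only when x < 0)
        x = -x
    executed.add("s3")      # s3: return
    return x

absolute(5)                           # executes s1 and s3 only
statement_coverage = len(executed) / 3
# Adding absolute(-5) to the suite would bring coverage to 100%.
```

Statement testing designs test cases to raise exactly this ratio; here one more test case with a negative input completes the set.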
42. The individual element to be tested. There usually is one test object and many test items. See also test object. A reason or purpose for designing and executing a test.
test item
system
boundary value analysis
use case testing
43. A type of performance testing conducted to evaluate the behavior of a component or system with increasing load - e.g. numbers of parallel users and/or numbers of transactions - to determine what load can be handled by the component or system. See also performance testing - stress testing.
resumption criteria
load testing
basic block
compound condition
44. A white box test design technique in which test cases are designed to execute condition outcomes.
condition testing
test policy
static analyzer
white-box testing
45. Formal or informal testing conducted during the implementation of a component or system - usually in the development environment by developers. [After IEEE 610]
development testing
decision outcome
functional test design technique
path sensitizing
46. A tool that provides support for the identification and control of configuration items - their status over changes and versions - and the release of baselines consisting of configuration items.
configuration management tool
basis test set
condition outcome
memory leak
47. The percentage of definition-use pairs that have been exercised by a test suite.
specification
error tolerance
incident management tool
data flow coverage
48. Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]
exception handling
walkthrough
maintainability
pass/fail criteria
49. Procedure to derive and/or select test cases for nonfunctional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique. Procedure to derive and/or select test cases based on an analysis of the specification of a component or system without reference to its internal structure.
multiple condition coverage
non-functional test design techniques
test design tool
input domain
50. The effect on the component or system by the measurement instrument when the component or system is being measured - e.g. by a performance testing tool or monitor. For example performance may be slightly worse when performance testing tools are being used.
probe effect
stress testing tool
scalability testing
output