Test your basic knowledge | ISTQB
Subjects: certifications, istqb, it-skills
Instructions: Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so some answers may occasionally seem obvious; even so, retaking the test reinforces your understanding each time.
1. A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]
operational testing
daily build
project
compatibility testing
2. The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
walkthrough
stress testing tool
elementary comparison testing
installability
3. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]
functionality testing
master test plan
feature
risk
4. A human action that produces an incorrect result. [After IEEE 610]
level test plan
interoperability
error
measurement scale
5. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
N-switch coverage
hyperlink tool
mutation analysis
V-model
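For study purposes, 1-switch (N=1) coverage can be illustrated by enumerating feasible pairs of consecutive transitions. A minimal sketch, using a hypothetical two-state machine (the transitions and the "executed" set are invented for illustration):

```python
# Sketch of 1-switch (N=1) coverage: the percentage of feasible sequences
# of 2 consecutive transitions exercised by a test suite.
from itertools import product

# Transitions as (from_state, to_state) pairs -- purely illustrative.
transitions = {("off", "on"), ("on", "off"), ("on", "on")}

# Every feasible pair of consecutive transitions: the second transition
# must start in the state where the first one ended.
all_pairs = {(a, b) for a, b in product(transitions, repeat=2) if a[1] == b[0]}

# Pairs that a (hypothetical) test suite actually exercised.
executed = {(("off", "on"), ("on", "off"))}

coverage = 100.0 * len(executed & all_pairs) / len(all_pairs)
print(f"1-switch coverage: {coverage:.0f}%")  # 1 of 5 feasible pairs
```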
6. A black box test design technique in which test cases are designed to execute user scenarios.
static analysis tool
load profile
scenario testing
reviewer
7. A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.
usability
test policy
Failure Mode and Effect Analysis (FMEA)
cause-effect graph
8. Comparison of actual and expected results, performed after the software has finished running.
post-execution comparison
fault seeding tool
version control
exit point
9. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]
software
efficiency testing
installability testing
usability
10. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
error tolerance
acceptance testing
equivalence partitioning
test execution technique
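As a study aid, equivalence partitioning can be sketched in a few lines. The validation rule, function name and partition values below are all hypothetical, chosen only to show one representative test value per partition:

```python
# Sketch of equivalence partitioning for a hypothetical age-validation
# rule: valid ages are 18..65 inclusive, giving three partitions.
def is_eligible(age: int) -> bool:
    """Return True when age falls in the valid partition 18..65."""
    return 18 <= age <= 65

# One representative value per partition covers each partition once.
partitions = {
    "below range": 10,   # invalid partition
    "in range": 40,      # valid partition
    "above range": 70,   # invalid partition
}
results = {name: is_eligible(value) for name, value in partitions.items()}
print(results)
```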
11. A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]
acceptance testing
functionality
test level
system integration testing
12. A test plan that typically addresses one test phase. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested and the testing tasks.
hyperlink
phase test plan
informal review
output
13. The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]
COTS
changeability
input domain
decision table
14. A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
data definition
incident management tool
executable statement
test summary report
15. The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]
test summary report
black-box test design technique
static analysis tool
reliability
16. Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.
output value
CAST
result
database integrity testing
17. A test is deemed to fail if its actual result does not match its expected result.
measurement
test case specification
fail
input value
18. The capability of the software product to be attractive to the user. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]
pass
attractiveness
domain
user test
19. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished.
design-based testing
exit criteria
entry point
debugging tool
20. Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]
pass/fail criteria
coverage tool
variable
test execution automation
21. A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available. See also low level test case. A test case with concrete (implementation level) values for input data and expected results.
functionality testing
high level test case
ad hoc testing
static testing
22. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.
dynamic analysis
test level
suspension criteria
defect management tool
23. A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
suitability
test manager
keyword driven testing
incident
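Keyword-driven testing can be sketched with a toy runner. Everything here is hypothetical: the keywords, the supporting functions, and the "data file" rows, which stand in for rows a real framework would read from a spreadsheet or CSV:

```python
# Toy keyword-driven runner: data rows map keywords to supporting
# scripts (here, plain functions acting on a shared state dict).
def open_app(state, name):
    state["app"] = name

def enter(state, field, value):
    state[field] = value

def verify(state, field, expected):
    assert state.get(field) == expected, f"{field}: {state.get(field)!r} != {expected!r}"

KEYWORDS = {"open_app": open_app, "enter": enter, "verify": verify}

# Test data as (keyword, *args) rows, as would come from a data file.
rows = [
    ("open_app", "calculator"),
    ("enter", "display", "42"),
    ("verify", "display", "42"),
]

state = {}
for keyword, *args in rows:
    KEYWORDS[keyword](state, *args)
print("all steps passed")
```

The point of the technique is that new tests can be written by editing the data rows alone, without touching the supporting scripts.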
24. The importance of a risk as defined by its characteristics impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g. high, medium, low) or quantitatively.
Test Process Improvement (TPI)
risk level
hyperlink
simulation
25. A software tool used to carry out instrumentation.
test implementation
instrumenter
test design tool
test scenario
26. A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.
business process-based testing
pair testing
risk control
defect based test design technique
27. Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).
off-the-shelf software
pass
system integration testing
adaptability
28. The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
replaceability
configuration item
white-box testing
risk
29. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.
beta testing
requirements-based testing
test phase
test environment
30. An attribute of a test indicating whether the same results are produced each time the test is executed.
version control
test reproduceability
recoverability testing
requirements management tool
31. A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator. A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.
root cause
emulator
risk-based testing
test type
32. A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.
actual outcome
postcondition
test data preparation tool
test script
33. A system whose failure or malfunction may result in death or serious injury to people, or loss or severe damage to equipment, or environmental harm.
traceability
branch coverage
safety critical system
static code analysis
34. The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]
configuration auditing
functionality testing
requirements phase
data flow testing
35. The behavior predicted by the specification, or another source, of the component or system under specified conditions.
test driven development
expected result
automated testware
failure rate
36. A model structure wherein attaining the goals of a set of process areas establishes a maturity level; each level builds a foundation for subsequent levels. [CMMI]
staged representation
regression testing
static analysis
test summary report
37. The process of testing the installability of a software product. See also portability testing. The process of testing to determine the portability of a software product.
installability testing
version control
inspection
negative testing
38. A point in time in a project at which defined (intermediate) deliverables and results should be ready.
milestone
dynamic analysis
subpath
decision condition coverage
39. An instance of an input. See also input. A variable (whether stored within a component or outside) that is read by a component.
basis test set
data flow coverage
input value
design-based testing
40. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]
daily build
exploratory testing
metric
configuration item
41. A white box test design technique in which test cases are designed to execute decision outcomes.
consistency
decision testing
exit point
decision coverage
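Decision testing can be illustrated with a function containing a single decision, where one test case per decision outcome yields full decision coverage. The function and its inputs are hypothetical:

```python
# Sketch of decision (branch) testing: design test cases so that each
# decision outcome (True and False) is executed at least once.
def classify(n: int) -> str:
    if n < 0:          # the decision under test
        return "negative"
    return "non-negative"

# Two test cases, one per decision outcome -> 100% decision coverage.
assert classify(-1) == "negative"      # decision outcome: True
assert classify(3) == "non-negative"   # decision outcome: False
print("both decision outcomes exercised")
```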
42. Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]
validation
operational testing
exhaustive testing
exploratory testing
43. The set from which valid output values can be selected. See also domain. The set from which valid input and/or output values can be selected.
incident management tool
decision condition testing
output domain
independence of testing
44. Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. See also black box test design technique. Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
benchmark test
functional test design technique
coverage
partition testing
45. An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress.
hazard analysis
audit trail
equivalence partition
test session
46. A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.
component integration testing
management review
security testing tool
test automation
47. The process of testing to determine the maintainability of a software product.
false-pass result
maintainability testing
functional testing
test scenario
48. A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.
hazard analysis
test specification technique
project risk
configuration auditing
49. The percentage of executable statements that have been exercised by a test suite.
N-switch coverage
statement coverage
black-box test design technique
use case testing
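The statement-coverage percentage is a straightforward ratio, which a short sketch can make concrete. The statement identifiers and counts below are hypothetical:

```python
# Sketch: statement coverage = executed statements / total statements * 100.
def coverage_percent(executed: set, all_statements: set) -> float:
    """Percentage of statements exercised by a test suite."""
    return 100.0 * len(executed & all_statements) / len(all_statements)

# Hypothetical: a function with 4 statements; a test suite exercised 3.
print(coverage_percent({1, 2, 4}, {1, 2, 3, 4}))  # 75.0
```

Real tools instrument the code to record which statements ran, but the reported figure reduces to this ratio.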
50. A path that cannot be exercised by any set of possible input values.
test level
level test plan
infeasible path
defect masking