Test your basic knowledge | ISTQB
Subjects: certifications, istqb, it-skills
Instructions:
Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you may sometimes find the answers obvious, but you will see that your understanding is reinforced each time you take the test.
1. The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
risk identification
test manager
equivalence partition coverage
condition coverage
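A small Python illustration of the idea in question 1 (the function and test values are invented, not part of the glossary entry): a suite reaches 100% condition coverage when every atomic condition in the decision has been evaluated both True and False.

    # Hypothetical decision with two atomic conditions: (a > 0) and (b > 0).
    def accept(a, b):
        return "accepted" if a > 0 and b > 0 else "rejected"

    # Three inputs drive each condition to both outcomes:
    #   (1, 1)  -> a>0 True,  b>0 True
    #   (1, -1) -> a>0 True,  b>0 False
    #   (-1, 1) -> a>0 False  (b>0 is short-circuited here)
    # so this suite achieves 100% condition coverage for the decision.
    for a, b in [(1, 1), (1, -1), (-1, 1)]:
        print((a, b), accept(a, b))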
2. A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]
test target
statement testing
project
stub
3. The process of testing to determine the interoperability of a software product. See also functionality testing. The process of testing to determine the functionality of a software product.
monitor
low level test case
documentation testing
interoperability testing
4. A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.
N-switch coverage
decision coverage
test progress report
scalability
5. A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.
configuration management tool
thread testing
test comparison
fault seeding tool
6. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.
debugging
test case specification
defect management tool
operational acceptance testing
7. Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
low level test case
testware
boundary value coverage
test type
8. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]
portability
interoperability testing
retrospective meeting
error guessing
9. A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
random testing
test
failure rate
measurement scale
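A minimal sketch of question 9's technique in Python, assuming an invented operational profile (the operation names and weights are illustrative only): inputs are drawn pseudo-randomly so that their frequencies match expected production usage.

    import random

    # Hypothetical operational profile: relative frequency of user operations.
    profile = {"search": 0.70, "view_item": 0.25, "checkout": 0.05}

    random.seed(42)  # reproducible pseudo-random selection
    operations = random.choices(list(profile), weights=list(profile.values()), k=1000)

    # The generated test sequence roughly mirrors the profile.
    for op in profile:
        print(op, operations.count(op) / len(operations))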
10. A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
low level test case
security tool
validation
test suite
11. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing. Testing based on an analysis of the specification of the functionality of a component or system.
accuracy
retrospective meeting
N-switch coverage
basic block
12. Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.
level test plan
stress testing
scenario testing
coverage analysis
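To make question 12 concrete, a tiny Python sketch with invented numbers: the measured coverage is compared against a predetermined criterion to decide whether further test cases are needed.

    # Hypothetical measurement: 170 of 200 branches exercised by the suite so far.
    covered, total = 170, 200
    criterion = 0.90  # assumed exit criterion: 90% branch coverage

    achieved = covered / total
    print(f"achieved {achieved:.0%}, required {criterion:.0%}")
    if achieved < criterion:
        print("additional test cases are needed for the uncovered branches")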
13. Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users' business procedures or operational procedures.
procedure testing
compatibility testing
capture/playback tool
intake test
14. A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute or structural element.
decision condition testing
concurrency testing
condition
basis test set
15. The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
test item
incident management
test approach
testing
16. A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing. A systematic way of testing all-pair combinations of variables using orthogonal arrays.
operational environment
executable statement
pairwise testing
fault tolerance
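A short Python check of question 16's point, with made-up parameters: for three parameters of two values each, four well-chosen test cases already cover every pair of values, whereas exhaustive combination needs eight.

    from itertools import combinations, product

    # Hypothetical parameters and values (illustrative only).
    params = {"browser": ["chrome", "firefox"],
              "os": ["linux", "windows"],
              "locale": ["en", "de"]}

    # A hand-picked 4-row test set; exhaustive testing would need 2*2*2 = 8 rows.
    tests = [("chrome", "linux", "en"),
             ("chrome", "windows", "de"),
             ("firefox", "linux", "de"),
             ("firefox", "windows", "en")]

    # Verify that every value pair from any two parameters appears in some test.
    names = list(params)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for pair in product(params[a], params[b]):
            assert any(t[i] == pair[0] and t[j] == pair[1] for t in tests), pair
    print("all value pairs covered by", len(tests), "tests")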
17. The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
test manager
back-to-back testing
availability
iterative development model
18. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
test evaluation report
test case
test type
staged representation
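For question 18, a minimal unittest sketch in Python showing the classic parts of a test case (the function under test and its values are hypothetical): precondition, input values, expected result, and postcondition.

    import unittest

    def apply_discount(price, percent):        # hypothetical unit under test
        return round(price * (1 - percent / 100), 2)

    class TestDiscount(unittest.TestCase):
        def setUp(self):
            self.base_price = 100.00           # execution precondition

        def test_ten_percent_discount(self):
            # input values and expected result for one test condition
            self.assertEqual(apply_discount(self.base_price, 10), 90.00)

        def tearDown(self):
            pass                               # execution postcondition / clean-up

    if __name__ == "__main__":
        unittest.main()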
19. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.
safety
precondition
ad hoc testing
incident management tool
20. A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]
consistency
Failure Mode - Effect and Criticality Analysis (FMECA)
infeasible path
requirement
21. A test plan that typically addresses one test level. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks and who will do each task.
level test plan
test tool
error guessing
documentation testing
22. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.
statistical testing
learnability
test input
decision condition testing
23. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user-manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]
test oracle
hyperlink tool
risk identification
fail
24. A risk directly related to the test object. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.
product risk
COTS
audit trail
validation
25. Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]
fault seeding
concurrency testing
installation guide
test approach
26. A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
consistency
defect
compliance testing
defect report
27. A skilled professional who is involved in the testing of a component or system.
keyword driven testing
testing
tester
maintenance
28. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result. The behavior produced/observed when a component or system is tested.
instrumenter
state diagram
result
defect masking
29. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best-practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM.
input value
outcome
non-functional testing
Capability Maturity Model Integration (CMMI)
30. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.
resource utilization
defect taxonomy
non-conformity
test item
31. A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]
defect tracking tool
output domain
test case specification
recoverability
32. The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]
test design specification
static code analyzer
defect density
simulation
33. A minimal software item that can be tested in isolation.
unit
result
branch testing
test execution
34. A tool that carries out static analysis.
decision coverage
decision table
decision condition coverage
static analyzer
35. An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]
configuration control
specified input
emulator
quality
36. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.
test data
changeability
state transition testing
outcome
37. The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]
buffer
efficiency
static analysis
negative testing
38. The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
defect report
reviewer
test condition
decision outcome
39. A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed. [IEEE 610]
usability testing
pointer
deliverable
scribe
40. A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
exception handling
procedure testing
driver
maintainability
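For question 40, a tiny Python sketch of a test driver (all names invented): the driver replaces the component's normal caller and exercises it directly with chosen inputs.

    # Hypothetical component under test, normally called by a larger system.
    def parse_amount(text):
        return int(text.strip().replace(",", ""))

    # Minimal test driver: it takes over the calling side and feeds test inputs.
    def driver():
        for raw, expected in [("42", 42), (" 1,000 ", 1000)]:
            actual = parse_amount(raw)
            print(repr(raw), "->", actual, "OK" if actual == expected else "FAIL")

    if __name__ == "__main__":
        driver()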
41. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
beta testing
fault seeding
pass
static analyzer
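A worked illustration of question 41's purpose, with invented numbers: the detection rate of the seeded defects gives a rough proportional estimate of how many genuine defects remain.

    seeded = 10          # defects intentionally added (assumed figure)
    seeded_found = 8     # seeded defects detected by the test effort
    real_found = 40      # genuine defects detected by the same effort

    # Assume real defects are detected at the same rate as seeded ones.
    estimated_real_total = real_found * seeded / seeded_found    # 50.0
    estimated_remaining = estimated_real_total - real_found      # 10.0
    print(estimated_real_total, estimated_remaining)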
42. The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
best practice
modelling tool
availability
production acceptance testing
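Question 42's percentage can be made concrete with a small calculation (the figures are invented); one common formulation is uptime divided by total time in the period.

    uptime_hours = 718.0      # assumed operational time in a 30-day month
    downtime_hours = 2.0      # assumed outage time in the same period

    availability = uptime_hours / (uptime_hours + downtime_hours)
    print(f"availability: {availability:.2%}")    # about 99.72%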
43. A tool for seeding (i.e. intentionally inserting) faults in a component or system.
fault seeding tool
non-functional testing
continuous representation
test object
44. Testing of software used to convert data from existing systems for use in replacement systems.
availability
conversion testing
test execution
orthogonal array testing
45. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]
software quality
anomaly
scripted testing
feasible path
46. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]
stress testing
re-testing
decision testing
testable requirements
47. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
buffer
low level test case
orthogonal array testing
failure rate
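A quick arithmetic example for question 47 (numbers invented): the same failure count expressed against two different units of measure.

    failures = 4
    operating_hours = 200.0
    transactions = 10_000

    print(failures / operating_hours, "failures per hour")         # 0.02
    print(failures / transactions, "failures per transaction")     # 0.0004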
48. A black box test design technique in which test cases are designed based on boundary values. See also boundary value. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge.
scenario testing
production acceptance testing
boundary value analysis
root cause
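To illustrate question 48 (the valid range 1..100 is an assumption): boundary value analysis picks test values on and just beyond each edge of the valid partition.

    # Hypothetical validator for an input field whose valid range is 1..100.
    def is_valid(quantity):
        return 1 <= quantity <= 100

    # Boundary values: just below, on, and just above each edge of the partition.
    for value in [0, 1, 2, 99, 100, 101]:
        print(value, is_valid(value))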
49. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.
portability
operability
maintainability testing
test control
50. The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]
test estimation
data definition
interoperability
test execution phase