Test your basic knowledge | ISTQB
Subjects: certifications, istqb, it-skills
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. You may sometimes find the answers obvious, but taking the test repeatedly reinforces your understanding.
1. A pointer that references a location that is out of scope for that pointer or that does not exist. See also pointer: a data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed.
formal review
scripted testing
wild pointer
maintainability testing
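For illustration, a minimal C sketch of a pointer left referencing a location that no longer exists after its scope ends (the function and variable names are made up):

    #include <stdio.h>

    int *make_pointer(void) {
        int local = 42;        /* automatic variable, destroyed when the function returns */
        return &local;         /* the returned address is now out of scope for the caller */
    }

    int main(void) {
        int *p = make_pointer();   /* p references a location that no longer exists */
        printf("%d\n", *p);        /* undefined behavior: dereferencing such a pointer */
        return 0;
    }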
2. The process of testing to determine the compliance of the component or system.
staged representation
software life cycle
compliance testing
attractiveness
3. [Beizer] A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
CASE
test data preparation tool
N-switch testing
partition testing
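As a sketch of the technique, assume a hypothetical function that classifies ages into the partitions 0-17, 18-64 and 65+; one representative value per partition covers each partition at least once:

    #include <assert.h>

    /* Hypothetical component under test: 0 = minor, 1 = adult, 2 = senior. */
    int classify_age(int age) {
        if (age < 18) return 0;
        if (age < 65) return 1;
        return 2;
    }

    int main(void) {
        assert(classify_age(10) == 0);   /* representative of partition 0-17  */
        assert(classify_age(40) == 1);   /* representative of partition 18-64 */
        assert(classify_age(70) == 2);   /* representative of partition 65+   */
        return 0;
    }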
4. The capability of the software product to enable modified software to be tested. [ISO 9126] See also maintainability: the ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
defect management tool
statement testing
defect report
testability
5. A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]
static analysis tool
performance indicator
functional integration
random testing
6. The tracing of requirements through the layers of development documentation to components.
Function Point Analysis (FPA)
output
vertical traceability
emulator
7. A device, computer program or system used during testing which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO-178B] See also emulator: a device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
complexity
simulator
test data preparation tool
test schedule
8. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing: testing based on an analysis of the specification of the functionality of a component or system.
accuracy
blocked test case
database integrity testing
security
9. The first executable statement within a component.
incident logging
data definition
equivalence partition coverage
entry point
10. A sequence of executable statements within a component.
capture/playback tool
subpath
LCSAJ
Test Maturity Model (TMM)
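To make the last two definitions concrete, a small hypothetical C component annotated with its entry point and one subpath:

    /* Hypothetical component used only for illustration. */
    int absolute_difference(int a, int b) {
        int diff = a - b;   /* entry point: the first executable statement in the component */
        if (diff < 0)
            diff = -diff;   /* further executable statements */
        return diff;        /* exit point */
    }
    /* The statements "diff = a - b;", "if (diff < 0)" and "diff = -diff;" taken in order
       form one subpath: a sequence of executable statements within the component. */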
11. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.
entry criteria
safety critical system
co-existence
outcome
12. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]
postcondition
maintainability testing
Capability Maturity Model Integration (CMMI)
classification tree
13. A model that shows the growth in reliability over time during continuous testing of a component or system as a result of the removal of defects that result in reliability failures.
outcome
static code analysis
reliability growth model
subpath
14. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178B]
system testing
decision
test specification
independence of testing
15. A pointer within a web page that leads to other web pages.
test specification technique
branch coverage
test design
hyperlink
16. A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]
test execution tool
production acceptance testing
pass
defect density
17. A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition: an item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
test design technique
precondition
condition
static analysis tool
18. A tool for seeding (i.e. intentionally inserting) faults in a component or system.
fault seeding tool
acceptance testing
data flow testing
path testing
19. Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.
inspection
test control
version control
user acceptance testing
20. A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]
data flow analysis
defect report
test object
severity
21. The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.
error tolerance
data flow coverage
condition determination coverage
path
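A minimal worked sketch, assuming the decision is the compound condition (a && b): the three tests below show each single condition independently affecting the decision outcome, which is what condition determination coverage counts:

    #include <assert.h>

    /* Hypothetical decision under test. */
    int decision(int a, int b) {
        return a && b;
    }

    int main(void) {
        assert(decision(1, 1) == 1);   /* a=true,  b=true  -> true  */
        assert(decision(0, 1) == 0);   /* only a changed vs. the first test; the outcome flips, so a affects it */
        assert(decision(1, 0) == 0);   /* only b changed vs. the first test; the outcome flips, so b affects it */
        return 0;
    }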
22. Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.
hazard analysis
dynamic comparison
load profile
audit trail
23. The process of transforming general testing objectives into tangible test conditions and test cases.
load profile
informal review
test design
changeability
24. All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis.
risk level
test basis
security testing
configuration item
25. Testing the quality of the documentation, e.g. user guide or installation guide.
feature
high level test case
test closure
documentation testing
26. The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]
test data preparation tool
installation wizard
dynamic analysis
boundary value
27. The response of a component or system to a set of input values and preconditions.
interface testing
test data preparation tool
decision table
behavior
28. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
test target
test oracle
branch coverage
failure rate
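As a quick worked example (the figures are made up): a system that logs 4 failures over 200 hours of operation has a failure rate of 4 / 200 = 0.02 failures per hour; the same count could equally be expressed per transaction or per computer run.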
29. Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126] See also functionality: the capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.
test item
load testing
security
operational profile testing
30. A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
debugging tool
test planning
learnability
quality management
31. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.
monitor
ad hoc testing
driver
safety critical system
32. A capability maturity model structure wherein capability levels provide a recommended order for approaching process improvement within specified process areas. [CMMI]
attractiveness
decision
continuous representation
learnability
33. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
iterative development model
operational environment
scripting language
daily build
34. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
actual result
multiple condition testing
boundary value
test suite
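As a sketch, assume a hypothetical validator that accepts integers from 1 to 100; the boundary values and the values at the smallest incremental distance on either side would be exercised like this:

    #include <assert.h>

    /* Hypothetical range check: valid inputs are 1..100. */
    int in_range(int value) {
        return value >= 1 && value <= 100;
    }

    int main(void) {
        assert(in_range(0)   == 0);   /* just below the lower edge */
        assert(in_range(1)   == 1);   /* lower boundary value      */
        assert(in_range(100) == 1);   /* upper boundary value      */
        assert(in_range(101) == 0);   /* just above the upper edge */
        return 0;
    }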
35. The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
understandability
milestone
test tool
operational testing
36. An attribute of a test indicating whether the same results are produced each time the test is executed.
test reproduceability
error tolerance
finite state machine
simulator
37. A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.
risk
executable statement
test run
state table
38. Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. 'A>B AND C>1000'.
compound condition
incident logging
monkey testing
blocked test case
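The example from the definition, written out as C with made-up values:

    #include <stdio.h>

    int main(void) {
        int a = 1500, b = 200, c = 2500;   /* illustrative values only */
        /* Two single conditions (a > b and c > 1000) joined by the logical operator AND. */
        if (a > b && c > 1000) {
            printf("the compound condition 'A>B AND C>1000' is true\n");
        }
        return 0;
    }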
39. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]
recoverability
quality
intake test
N-switch testing
40. A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).
review
defect management tool
test strategy
component
41. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]
suitability
exploratory testing
hyperlink
off-the-shelf software
42. A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.
performance profiling
exception handling
configuration control
test evaluation report
43. A test approach in which the test suite comprises all combinations of input values and preconditions.
path testing
exhaustive testing
V-model
test specification technique
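To see why this approach is rarely practical, a rough count under the assumption of independent inputs: a single 32-bit integer input alone has 2^32 (about 4.3 billion) possible values, two such inputs give 2^64 combinations, and each additional input or precondition multiplies the total again.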
44. A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
statistical testing
audit
path
exit criteria
45. The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]
configuration auditing
configuration
data definition
technical review
46. Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system. [IEEE 610]
cost of quality
software
replaceability
recovery testing
47. A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged.
recoverability
entry criteria
performance testing tool
cause-effect graph
48. A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]
false-pass result
behavior
feature
root cause
49. An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing: testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
Defect Detection Percentage (DDP)
non-functional test design techniques
functional integration
probe effect
50. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
volume testing
exercised
coverage
test comparison