Test your basic knowledge | ISTQB
Subjects: certifications, istqb, it-skills
Instructions:
Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study here first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you may sometimes find the correct answer obvious, but retaking the test reinforces your understanding.
1. A white box test design technique in which test cases are designed to execute condition outcomes.
Test Point Analysis (TPA)
Failure Mode - Effect and Criticality Analysis (FMECA)
condition testing
process
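A minimal sketch of what condition testing exercises (the function and test values are hypothetical, not from the ISTQB syllabus): each atomic condition inside a decision is driven to both a true and a false outcome.

```python
def eligible(age, is_member):
    """Decision with two atomic conditions: (age >= 18) and (is_member)."""
    if age >= 18 and is_member:
        return True
    return False

# Condition testing drives each atomic condition to both outcomes:
#   age >= 18   -> True in cases 1 and 3, False in case 2
#   is_member   -> True in cases 1 and 2, False in case 3
condition_cases = [
    ((20, True), True),
    ((15, True), False),
    ((20, False), False),
]
```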
2. The testing of individual software components. [After IEEE 610]
unit testing
pass
isolation testing
security
3. Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development.
decision coverage
failure mode
keyword driven testing
agile testing
4. Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange - Internet).
system integration testing
efficiency testing
test objective
LCSAJ testing
5. A tool that facilitates the recording and status tracking of incidents. These tools often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities. See also defect management tool.
monitor
risk analysis
integration
incident management tool
6. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review.
availability
elementary comparison testing
walkthrough
decision coverage
7. Separation of responsibilities - which encourages the accomplishment of objective testing. [After DO-178b]
test suite
CASE
equivalence partition
independence of testing
8. The ability of the software product to perform its required functions under stated conditions for a specified period of time - or for a specified number of operations. [ISO 9126]
failure mode
reliability
co-existence
system
9. The function to check on the contents of libraries of configuration items - e.g. for standards compliance. [IEEE 610]
functional integration
specified input
re-testing
configuration auditing
10. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]
code-based testing
test design specification
software quality
variable
11. Testing where the system is subjected to large volumes of data. See also resource-utilization testing. The process of testing to determine the resource-utilization of a software product.
quality attribute
volume testing
priority
maintainability testing
12. A pointer within a web page that leads to other web pages.
data driven testing
hyperlink
quality assurance
volume testing
13. A device or storage area used to store data temporarily to compensate for differences in rates of data flow, time or occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data. [IEEE 610]
driver
buffer
statement testing
code
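As an illustration of the buffer concept (a hypothetical bounded queue, not ISTQB material): a producer can deposit items faster or slower than a consumer removes them, and the buffer absorbs the difference up to its capacity.

```python
from collections import deque

class Buffer:
    """Temporary store that absorbs rate differences between a producer and a consumer."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = deque()

    def put(self, item):
        # Refuse new data once the buffer is full, mirroring a fixed-size device buffer.
        if len(self._items) >= self.capacity:
            raise OverflowError("buffer full")
        self._items.append(item)

    def get(self):
        # Items leave in the order they arrived (FIFO).
        return self._items.popleft()
```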
14. A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic, monitor the allocation, use and de-allocation of memory, and flag memory leaks.
isolation testing
dynamic analysis tool
error tolerance
resource utilization
15. The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]
thread testing
process
simulation
test data
16. A review not based on a formal (documented) procedure.
qualification
pass/fail criteria
root cause
informal review
17. A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development.
iterative development model
audit trail
test
recovery testing
18. An element of configuration management - consisting of the evaluation - co-ordination - approval or disapproval - and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]
Function Point Analysis (FPA)
version control
staged representation
vertical traceability
19. A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.
test execution schedule
off-the-shelf software
test case specification
functional test design technique
20. Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
software quality
structural coverage
regression testing
review
21. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished.
test condition
system
exit criteria
attack
22. The set from which valid output values can be selected. See also domain. The set from which valid input and/or output values can be selected.
decision coverage
specified input
output value
output domain
23. The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See also error-tolerance, fault-tolerance.
robustness
unit testing
data flow analysis
equivalence partition
24. A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects) - which can be used to design test cases.
integration testing
requirements phase
cause-effect graph
dynamic analysis
25. The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. [ISO 9126]
milestone
dynamic testing
performance testing
resource utilization
26. A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer review.
technical review
oracle
Fault Tree Analysis (FTA)
incremental development model
27. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).
bespoke software
design-based testing
test case specification
version control
28. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]
milestone
robustness testing
test oracle
hyperlink tool
29. A risk directly related to the test object. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.
blocked test case
test type
product risk
testable requirements
30. An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes, and the implementation status of approved changes. [IEEE 610]
monkey testing
status accounting
consistency
iterative development model
31. A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.
syntax testing
basic block
interoperability testing
cause-effect graphing
32. A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
LCSAJ
defect
compiler
test implementation
33. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan - test design specification - test case specification and test procedure specification or test script).
horizontal traceability
root cause
entry point
expected result
34. A reason or purpose for designing and executing a test.
test policy
test objective
Defect Detection Percentage (DDP)
test design technique
35. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]
test procedure specification
orthogonal array testing
resource utilization
operational profile testing
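A minimal sketch of the statistical idea behind operational profile testing (the operations and probabilities below are hypothetical): test operations are drawn at random in proportion to how often real users would perform them.

```python
import random

# Hypothetical operational profile: operation -> probability of typical use.
PROFILE = {"browse": 0.7, "search": 0.2, "checkout": 0.1}

def next_operation(rng):
    """Pick the next operation to test, weighted by its probability of use."""
    ops = list(PROFILE)
    weights = [PROFILE[op] for op in ops]
    return rng.choices(ops, weights=weights, k=1)[0]
```

Over many draws, frequently used operations dominate the generated test sequence, so testing effort mirrors typical use.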
36. A point in time in a project at which defined (intermediate) deliverables and results should be ready.
root cause analysis
Failure Mode - Effect and Criticality Analysis (FMECA)
portability
milestone
37. Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple domains, interconnected, addressing large-scale inter-disciplinary common problems and purposes.
configuration item
test manager
system of systems
test specification
38. An instance of an output. See also output. A variable (whether stored within a component or outside) that is written by a component.
requirements phase
master test plan
output value
exit criteria
39. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.
bespoke software
requirement
agile testing
risk-based testing
40. The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See also efficiency.
test input
test tool
test charter
performance
41. A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy.
postcondition
priority
development testing
defect based test design technique
42. The percentage of equivalence partitions that have been exercised by a test suite.
software quality
expected result
equivalence partition coverage
defect management tool
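The metric itself is a simple ratio; as a sketch (partition counts are hypothetical), a suite that exercises 3 of 4 equivalence partitions achieves 75% equivalence partition coverage.

```python
def equivalence_partition_coverage(exercised, total):
    """Percentage of equivalence partitions exercised by a test suite."""
    return 100.0 * exercised / total

# Example: the input domain splits into 4 partitions (negative, zero,
# small positive, large positive) and the suite hits 3 of them.
coverage = equivalence_partition_coverage(3, 4)
```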
43. A white box test design technique in which test cases are designed to execute branches.
Test Process Improvement (TPI)
capture/replay tool
branch testing
test driven development
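A minimal sketch of branch testing (hypothetical function and values): each branch outcome of a decision must be executed by at least one test case.

```python
def sign_label(n):
    if n < 0:              # branch taken when n is negative
        return "negative"
    return "non-negative"  # branch taken otherwise

# Branch testing requires at least one test forcing each branch:
branch_tests = [(-5, "negative"), (0, "non-negative")]
```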
44. The process of identifying risks using techniques such as brainstorming - checklists and failure history.
static analyzer
simulation
risk identification
risk type
45. A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository - e.g. requirements management tool - from specified test conditions held in the tool itself - or from code.
condition
measurement
test design tool
informal review
46. Testing to determine how the occurrence of two or more activities within the same interval of time - achieved either by interleaving the activities or by simultaneous execution - is handled by the component or system. [After IEEE 610]
Capability Maturity Model Integration (CMMI)
classification tree
concurrency testing
isolation testing
47. A high level metric of effectiveness and/or efficiency used to guide and control progressive test development - e.g. Defect Detection Percentage (DDP).
interoperability
finite state machine
test performance indicator
test logging
48. Coverage measures based on the internal structure of a component or system.
structural coverage
coverage item
cause-effect graphing
master test plan
49. A pointer that references a location that is out of scope for that pointer or that does not exist. See also pointer.
orthogonal array
beta testing
wild pointer
hyperlink
50. Code that cannot be reached and therefore is impossible to execute.
state table
data definition
test case
unreachable code
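A small sketch of unreachable code (hypothetical function): the final statement follows an unconditional return on every path, so no input can ever execute it.

```python
def absolute(n):
    """Return |n|; the final statement below can never execute."""
    if n >= 0:
        return n
    return -n
    n = 0  # unreachable code: every path above has already returned
```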