Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you may at times find the answers obvious, but repeated takes will reinforce your understanding.
1. An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]






2. A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






3. The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






4. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once. [Beizer]
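
For example, a minimal Python sketch of this technique (the classify_age function and its three partitions are invented for illustration):

    # Hypothetical unit under test with three equivalence partitions for age:
    # invalid (age < 0), minor (0 <= age < 18), adult (age >= 18).
    def classify_age(age):
        if age < 0:
            raise ValueError("age cannot be negative")
        return "minor" if age < 18 else "adult"

    # One representative test value per partition covers each partition once.
    def test_invalid():
        try:
            classify_age(-5)
            assert False, "expected ValueError"
        except ValueError:
            pass

    def test_minor():
        assert classify_age(10) == "minor"

    def test_adult():
        assert classify_age(40) == "adult"

    test_invalid(); test_minor(); test_adult()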






5. The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






6. A form of static analysis based on a representation of sequences of events (paths) in the execution through a component or system.






7. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
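
For example, a quick Python illustration with invented figures, using time as the unit of measure:

    # Failure rate = failures of a given category / chosen unit of measure.
    failures = 12             # observed failures in the category (invented)
    hours_of_operation = 400  # the unit of measure here is operating hours

    failure_rate = failures / hours_of_operation
    print(f"{failure_rate:.3f} failures per hour")  # prints 0.030 failures per hour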






8. Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.






9. A review not based on a formal (documented) procedure.






10. A form of static analysis based on the definition and usage of variables.
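
For example, a small Python sketch that performs a crude version of this analysis on a code fragment (the fragment and the check are invented for illustration):

    import ast
    import textwrap

    source = textwrap.dedent("""
        x = 1        # definition of x
        y = x + 2    # use of x, definition of y
        z = 5        # definition of z that is never used
        print(y)     # use of y
    """)

    # Collect variable definitions (Store) and usages (Load) from the AST.
    defs, uses = set(), set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            (defs if isinstance(node.ctx, ast.Store) else uses).add(node.id)

    # A classic data flow anomaly: defined but never used.
    print("defined but never used:", defs - uses)  # prints {'z'}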






11. A path for which a set of input values and preconditions exists which causes it to be executed.






12. A test is deemed to pass if its actual result matches its expected result.
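
In code form, the comparison might look like this minimal Python sketch (the add function and values are invented):

    def add(a, b):   # hypothetical test item
        return a + b

    expected = 5
    actual = add(2, 3)
    assert actual == expected, f"FAIL: got {actual}, expected {expected}"
    print("PASS: actual result matches expected result")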






13. The set from which valid output values can be selected. See also domain. The set from which valid input and/or output values can be selected.






14. Testing performed to expose defects in the interfaces and interaction between integrated components.






15. The tracing of requirements through the layers of development documentation to components.






16. The calculated approximation of a result (e.g. effort spent, completion date, costs involved, number of test cases, etc.) which is usable even if input data may be incomplete, uncertain, or noisy.
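
One common way to compute such an approximation is a three-point (PERT-style) estimate; the Python sketch below uses invented figures purely for illustration:

    # Three-point estimate: weighted average of optimistic, most likely,
    # and pessimistic values, tolerant of uncertain inputs.
    optimistic, most_likely, pessimistic = 10, 16, 28  # person-days (invented)

    estimate = (optimistic + 4 * most_likely + pessimistic) / 6
    spread = (pessimistic - optimistic) / 6

    print(f"effort estimate: {estimate:.1f} person-days (+/- {spread:.1f})")
    # prints: effort estimate: 17.0 person-days (+/- 3.0)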






17. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






18. A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.






19. A test plan that typically addresses one test phase. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice. [After IEEE 829]






20. The evaluation of a condition to True or False.






21. An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress. See also exploratory testing.






22. During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report. See also test process.






23. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.






24. Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.






25. A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. a requirements management tool, from specified test conditions held in the tool itself, or from code.






26. Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing. Testing, either functional or non-functional, without reference to the internal structure of the component or system.






27. A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.






28. Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.






29. The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions. [ISO 9126]






30. A test environment comprised of stubs and drivers needed to execute a test.
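
For example, a minimal Python sketch of such a harness (the checkout component and payment gateway are invented for illustration):

    # Hypothetical component under test: depends on a payment gateway
    # that is unavailable in the test environment.
    def checkout(cart_total, gateway):
        return "confirmed" if gateway.charge(cart_total) else "declined"

    # Stub: stands in for the real gateway with canned responses.
    class GatewayStub:
        def __init__(self, succeed):
            self.succeed = succeed
        def charge(self, amount):
            return self.succeed

    # Driver: invokes the component under test and checks the outcomes.
    def run_harness():
        assert checkout(42.0, GatewayStub(succeed=True)) == "confirmed"
        assert checkout(42.0, GatewayStub(succeed=False)) == "declined"
        print("harness run complete")

    run_harness()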






31. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.






32. Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]






33. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.






34. (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126]






35. The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
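
For example, a Python sketch with an invented two-condition decision; three tests reach 100% decision condition coverage even under Python's short-circuit evaluation:

    # Decision under test: "logged_in and has_permission".
    def grant_access(logged_in, has_permission):
        return bool(logged_in and has_permission)

    # Condition and decision outcomes exercised:
    assert grant_access(True, True) is True    # both conditions True, decision True
    assert grant_access(True, False) is False  # second condition False, decision False
    assert grant_access(False, True) is False  # first condition False, decision False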






36. The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).






37. A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
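
For example, a Python sketch where an invented discount rule is tested with one test case per decision table combination:

    # Invented decision table: (member, order_over_100) -> discount rate.
    decision_table = {
        (True, True): 0.15,
        (True, False): 0.10,
        (False, True): 0.05,
        (False, False): 0.00,
    }

    def discount(member, order_over_100):  # hypothetical unit under test
        if member:
            return 0.15 if order_over_100 else 0.10
        return 0.05 if order_over_100 else 0.00

    # One test case per combination of causes in the table.
    for (member, big_order), expected in decision_table.items():
        assert discount(member, big_order) == expected
    print("all decision table combinations pass")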






38. The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]






39. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing. Testing performed to expose defects in the interfaces and interaction between integrated components.






40. The process of testing to determine the interoperability of a software product. See also functionality testing. The process of testing to determine the functionality of a software product.






41. A white box test design technique in which test cases are designed to execute statements.
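
For example, a minimal Python sketch (the absolute function is invented) in which two test cases execute every statement:

    def absolute(n):           # hypothetical unit under test
        if n < 0:              # statement 1
            n = -n             # statement 2
        return n               # statement 3

    assert absolute(-3) == 3   # executes statements 1, 2 and 3
    assert absolute(5) == 5    # executes statements 1 and 3
    # Together the two tests give 100% statement coverage.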






42. The individual element to be tested. There usually is one test object and many test items. See also test object. The component or system to be tested.






43. A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system. See also performance testing, stress testing.






44. A white box test design technique in which test cases are designed to execute condition outcomes.
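
For example, a Python sketch with an invented two-condition decision, where each condition is driven to both outcomes:

    # Decision with two conditions: (x > 0) and (x < 10).
    def in_range(x):
        return x > 0 and x < 10

    assert in_range(5) is True    # x > 0: True,  x < 10: True
    assert in_range(20) is False  # x > 0: True,  x < 10: False
    assert in_range(-1) is False  # x > 0: False (x < 10 short-circuited)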






45. Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]






46. A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.






47. The percentage of definition-use pairs that have been exercised by a test suite.
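
For example, a Python sketch computing this percentage for an invented set of definition-use pairs:

    # Invented definition-use pairs for a variable, and the subset a
    # hypothetical test suite actually exercised.
    du_pairs = [
        ("def of x at line 2", "use of x at line 4"),
        ("def of x at line 2", "use of x at line 7"),
        ("def of x at line 5", "use of x at line 7"),
    ]
    exercised = {du_pairs[0], du_pairs[2]}

    coverage = 100 * len(exercised) / len(du_pairs)
    print(f"definition-use pair coverage: {coverage:.0f}%")  # prints 67%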






48. Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]






49. Procedure to derive and/or select test cases for non-functional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique. Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.






50. The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.