Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from the answers to other questions, so an answer may sometimes seem obvious, but taking the test repeatedly reinforces your understanding.
1. The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.






2. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.






3. A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged.






4. A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]






5. The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]






6. A black box test design technique in which test cases are designed based on boundary values. See also boundary value. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
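
For study purposes, here is a minimal sketch of this technique in Python, assuming a hypothetical accept_quantity function specified to accept integers from 1 to 100 inclusive (the function and the range are invented for illustration, not part of the glossary definition):

    # Hypothetical system under test: valid quantities are 1..100 inclusive.
    def accept_quantity(quantity):
        return 1 <= quantity <= 100

    # Boundary value analysis picks values on the edges of the valid
    # equivalence partition and at the smallest increment on either side.
    boundary_cases = [
        (0, False),    # just below the lower boundary
        (1, True),     # lower boundary
        (2, True),     # just above the lower boundary
        (99, True),    # just below the upper boundary
        (100, True),   # upper boundary
        (101, False),  # just above the upper boundary
    ]

    for value, expected in boundary_cases:
        assert accept_quantity(value) == expected, value
    print("all boundary cases pass")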






7. The process of testing to determine the recoverability of a software product. See also reliability testing. The process of testing to determine the reliability of a software product.






8. A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]






9. An executable statement where a variable is assigned a value.






10. Testing based on an analysis of the internal structure of the component or system.






11. The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.






12. A form of static analysis based on a representation of sequences of events (paths) in the execution through a component or system.






13. A variable (whether stored within a component or outside) that is read by a component.






14. Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions. [After Beizer]






15. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.






16. An instance of an output. See also output. A variable (whether stored within a component or outside) that is written by a component.






17. A high level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP).
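
As an illustration, a small sketch of how such a metric is commonly computed: the usual DDP formula divides the defects found by testing by all defects found in total, including those that escaped to later phases (the numbers below are made up):

    # Defect Detection Percentage (DDP) as commonly defined:
    # defects found by the test phase divided by all defects eventually
    # found (including those that escaped), expressed as a percentage.
    def defect_detection_percentage(found_in_testing, found_after_release):
        total = found_in_testing + found_after_release
        return 100.0 * found_in_testing / total if total else 0.0

    # Example: 90 defects found in system test, 10 reported after release.
    print(defect_detection_percentage(90, 10))  # 90.0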






18. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.






19. A tool that provides support for testing security characteristics and vulnerabilities.






20. The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]






21. A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.






22. A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.






23. The process of testing to determine the maintainability of a software product.






24. A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]






25. A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.






26. A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]






27. Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]






28. Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test first design paradigm. See also test driven development. A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.






29. The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.






30. Testing to determine the safety of a software product.






31. The process of testing to determine the maintainability of a software product.






32. A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]






33. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]
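
A small sketch of the idea in Python: test operations are drawn at random according to their probability of typical use (the operations and probabilities here are invented for illustration):

    import random

    # Hypothetical operational profile: each operation with its probability
    # of occurrence in typical use; probabilities sum to 1.
    profile = {
        "search": 0.6,
        "view_item": 0.3,
        "checkout": 0.1,
    }

    random.seed(42)  # reproducible example
    operations = list(profile)
    weights = [profile[op] for op in operations]

    # Generate a statistically representative sequence of test operations.
    test_sequence = random.choices(operations, weights=weights, k=10)
    print(test_sequence)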






34. A white box test design technique in which test cases are designed to execute branches.
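
For illustration, a minimal sketch in Python: the hypothetical function below contains one decision, so two test cases are enough to execute both branches:

    # Hypothetical code under test with a single decision point.
    def classify(temperature):
        if temperature >= 100:      # true branch
            return "boiling"
        return "not boiling"        # false branch

    # One test case per branch outcome.
    assert classify(100) == "boiling"      # exercises the true branch
    assert classify(25) == "not boiling"   # exercises the false branch
    print("both branches executed")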






35. Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]






36. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






37. The process of testing to determine the functionality of a software product.






38. A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified work loads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610] See also performance testing, load testing.






39. Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.






40. The response of a component or system to a set of input values and preconditions.






41. Testing where the system is subjected to large volumes of data. See also resource-utilization testing. The process of testing to determine the resource-utilization of a software product.






42. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
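
As a study aid, a sketch of the estimation step, using the common seeding-ratio estimator (if testing finds a given fraction of the seeded defects, it is assumed to have found roughly the same fraction of the real ones); the numbers are invented:

    # Estimate of remaining real defects after a seeding experiment.
    # Assumption: testing detects seeded and real defects at the same rate.
    def estimate_remaining(real_found, seeded_total, seeded_found):
        if seeded_found == 0:
            return None  # no basis for an estimate yet
        estimated_total_real = real_found * seeded_total / seeded_found
        return estimated_total_real - real_found

    # Example: 20 of 25 seeded defects found, 40 real defects found so far.
    print(estimate_remaining(real_found=40, seeded_total=25, seeded_found=20))  # 10.0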






43. A risk directly related to the test object. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.






44. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.






45. A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610]






46. The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
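
A short sketch of how that percentage is typically calculated from observed uptime and downtime (the figures are invented):

    # Availability as a percentage: time operational / total time in service.
    def availability_percent(uptime_hours, downtime_hours):
        total = uptime_hours + downtime_hours
        return 100.0 * uptime_hours / total if total else 0.0

    # Example: 719 hours up, 1 hour down in a 720-hour month.
    print(round(availability_percent(719, 1), 2))  # 99.86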






47. A technique used to analyze the causes of faults (defects). The technique visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults to disclose.






48. A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
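
For illustration, a small decision table expressed as Python data, with one test case derived per rule (the conditions, actions and the discount function are invented for the example):

    # Hypothetical decision table for a discount rule: each row combines
    # conditions (causes) with the expected action (effect). The technique
    # derives one test case per rule.
    decision_table = [
        # (is_member, order_over_100, expected_discount)
        (True,  True,  0.15),
        (True,  False, 0.05),
        (False, True,  0.10),
        (False, False, 0.00),
    ]

    # Hypothetical implementation under test.
    def discount(is_member, order_over_100):
        if is_member and order_over_100:
            return 0.15
        if is_member:
            return 0.05
        if order_over_100:
            return 0.10
        return 0.00

    for is_member, over_100, expected in decision_table:
        assert discount(is_member, over_100) == expected
    print("all decision table rules covered")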






49. Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.






50. Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.