Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from answers to other questions, so the correct answer may sometimes seem obvious; retaking the test still reinforces your understanding.
1. Testing to determine the robustness of the software product.

2. The process of recording information about tests executed into a test log.

3. A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]

4. A black box test design technique in which test cases are designed based on boundary values. See also boundary value: an input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

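For instance (question 4), boundary value selection can be mechanized; a minimal Python sketch, where the 18-65 partition and the function name are invented for illustration:

    # Hypothetical valid equivalence partition for an age field: 18..65 inclusive.
    # Boundary value analysis tests the edges and their nearest neighbours.
    def boundary_values(low, high):
        """Return boundary test inputs for the inclusive range [low, high]."""
        return [low - 1, low, high, high + 1]

    print(boundary_values(18, 65))  # [17, 18, 65, 66]
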
5. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result (the behavior produced/observed when a component or system is tested) and expected result.

6. The percentage of executable statements that have been exercised by a test suite.

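As question 6 implies, the metric is plain arithmetic; a short sketch (the counts are made up):

    # Statement coverage = executed statements / total executable statements * 100.
    executed_statements = 45   # illustrative count reported by a coverage tool
    total_statements = 60
    coverage = executed_statements / total_statements * 100
    print(f"Statement coverage: {coverage:.1f}%")   # 75.0%
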
7. A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.

8. A white box test design technique in which test cases are designed to execute decision outcomes.

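To illustrate question 8, one test per decision outcome gives full decision coverage of this hypothetical function:

    def discount(total):
        # A single decision: 'total >= 100' can evaluate to True or False.
        if total >= 100:
            return total * 0.9
        return total

    assert discount(150) == 135.0  # exercises the True outcome
    assert discount(50) == 50      # exercises the False outcome
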
9. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. [After Gilb and Graham]

10. An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]

11. Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]

12. Testing, either functional or non-functional, without reference to the internal structure of the component or system. See also black box test design technique: a procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.

13. An occurrence in which one defect prevents the detection of another. [After IEEE 610]

14. An executable statement where a variable is assigned a value.

15. The testing of individual software components. [After IEEE 610]

16. The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

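A common (though not mandated) way to combine the two factors in question 16 is a score of impact times likelihood; a sketch with invented 1-5 scales:

    # Risk exposure = impact x likelihood; all names and numbers are illustrative.
    risks = {
        "data loss on save": {"impact": 5, "likelihood": 2},
        "slow login page":   {"impact": 2, "likelihood": 4},
    }
    for name, r in sorted(risks.items(),
                          key=lambda kv: kv[1]["impact"] * kv[1]["likelihood"],
                          reverse=True):
        print(f"{name}: exposure {r['impact'] * r['likelihood']}")
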
17. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]

18. A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610]

19. A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

20. The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]

21. A tree showing equivalence partitions hierarchically ordered, which is used to design test cases in the classification tree method. See also classification tree method: a black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains. [Grochtmann]

22. Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

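The comparison in question 22 can be as simple as this sketch of a comparator an execution tool might apply (names are illustrative):

    def check(test_name, actual, expected):
        # Compare actual and expected results while the software is executing.
        verdict = "PASS" if actual == expected else "FAIL"
        print(f"{test_name}: {verdict} (actual={actual!r}, expected={expected!r})")
        return verdict == "PASS"

    check("uppercase", "abc".upper(), "ABC")  # PASS
    check("sum", sum([0.1, 0.2]), 0.3)        # FAIL: binary floats give 0.30000000000000004
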
23. A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]

24. A white box test design technique in which test cases are designed to execute condition outcomes.

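Contrast question 24 with question 8: condition testing targets each atomic condition, not just the overall decision. A sketch with a hypothetical two-condition decision:

    def ships_free(total, is_member):
        # Two atomic conditions inside one decision.
        return total > 50 and is_member

    # Each condition evaluates to both True and False across the test set:
    assert ships_free(60, True) is True    # total > 50: True,  is_member: True
    assert ships_free(40, True) is False   # total > 50: False
    assert ships_free(60, False) is False  # is_member: False
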
25. The result of a decision (which therefore determines the branches to be taken).

26. A tool that provides support for testing security characteristics and vulnerabilities.

27. The process of testing to determine the recoverability of a software product. See also reliability testing: the process of testing to determine the reliability of a software product.

28. A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk: a factor that could result in future negative consequences; usually expressed as impact and likelihood.

29. A technique used to characterize the elements of risk. The result of a hazard analysis will drive the methods used for development and testing of a system. See also risk analysis: the process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

30. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

31. The evaluation of a condition to True or False.

32. A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system. [After IEEE 610]

33. A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.

34. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.

35. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

36. The implementation of the test strategy for a specific project. It typically includes the decisions made, based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

37. The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

38. A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]

39. A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]

40. A specific category of risk related to the type of testing that can mitigate (control) that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.

41. The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user, when used under specified conditions. [ISO 9126]

42. An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.

43. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

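The table in question 43 can be held as plain data and walked to produce test cases; a minimal sketch with invented login rules:

    # Each row maps causes (inputs) to an effect (expected action). Illustrative only.
    decision_table = [
        {"valid_user": True,  "valid_pin": True,  "effect": "grant access"},
        {"valid_user": True,  "valid_pin": False, "effect": "deny access"},
        {"valid_user": False, "valid_pin": True,  "effect": "deny access"},
        {"valid_user": False, "valid_pin": False, "effect": "deny access"},
    ]
    for i, row in enumerate(decision_table, 1):
        inputs = {k: v for k, v in row.items() if k != "effect"}
        print(f"Test case {i}: {inputs} -> expect {row['effect']}")
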
44. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing (testing performed to expose defects in the interfaces and interaction between integrated components) and system integration testing.

45. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).

46. A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

47. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]

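A frequently used oracle for question 47 is an independent reference implementation; a sketch in which my_sort is the invented code under test and Python's built-in sorted() acts as the oracle:

    import random

    def my_sort(xs):
        # Hypothetical hand-rolled insertion sort: the code under test.
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] < x:
                i += 1
            out.insert(i, x)
        return out

    # The oracle supplies expected results; note it is not the code under test.
    for _ in range(100):
        data = [random.randint(0, 99) for _ in range(10)]
        assert my_sort(data) == sorted(data)
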
48. A white box test design technique in which test cases are designed to execute definition and use pairs of variables.

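Tying question 48 to question 14: each assignment below is a definition, and a data flow test pairs each with the use it can reach. The function is invented for illustration:

    def apply_fee(amount, premium):
        fee = 5.0            # definition of 'fee' (an assignment; cf. question 14)
        if premium:
            fee = 0.0        # a second, alternative definition of 'fee'
        return amount + fee  # use of 'fee'

    # One test per definition-use pair:
    assert apply_fee(10, premium=False) == 15.0  # pair: fee = 5.0 -> use
    assert apply_fee(10, premium=True) == 10.0   # pair: fee = 0.0 -> use
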
49. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.

50. Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
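
Real test runners support question 50's confirmation testing directly (pytest, for example, has a --last-failed option); the mechanism reduces to a sketch like this, with invented names:

    # Rerun only the tests recorded as failing in the previous run.
    previously_failed = {"test_total"}        # illustrative record from the last run

    def test_total():
        assert sum([1, 2, 3]) == 6            # the defect this test exposed is now fixed

    registry = {"test_total": test_total}
    for name, fn in registry.items():
        if name in previously_failed:         # confirmation: rerun prior failures only
            fn()
            print(name, "passed: corrective action confirmed")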