Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you may at times find an answer obvious, but you will see that it reinforces your understanding each time you take the test.

1. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

2. A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

3. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

4. A human action that produces an incorrect result. [After IEEE 610]

5. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished.

6. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]

7. A pointer within a web page that leads to other web pages.

8. The level of (business) importance assigned to an item, e.g. a defect.

9. The percentage of definition-use pairs that have been exercised by a test suite.

10. Choosing a set of input values to force the execution of a given path.

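A concrete sketch of the idea: the hypothetical function below contains two independent decisions, giving four paths, and the input values are picked to force one of them.

```python
def shipping_cost(weight_kg, express):
    # Two independent decisions give four possible paths through this function.
    # (The function and its thresholds are invented for illustration.)
    if weight_kg <= 5:
        cost = 10
    else:
        cost = 20
    if express:
        cost += 15
    return cost

# To force the "heavy parcel, express delivery" path, choose inputs that make
# the first condition false and the second condition true:
cost = shipping_cost(8, True)
```
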
11. The percentage of executable statements that have been exercised by a test suite.

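As an illustration, the toy tracer below (built on `sys.settrace`; a sketch, not a real coverage tool) records which statements of a function a test actually executes:

```python
import sys

def abs_value(x):
    if x < 0:
        x = -x
    return x

def executed_lines(func, *args):
    """Record the line numbers executed inside func (a toy tracer for illustration)."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines

# abs_value(3) never executes "x = -x", so one call covers 2 of 3 statements.
# Adding a negative input brings the suite to 100% statement coverage.
covered = executed_lines(abs_value, 3) | executed_lines(abs_value, -3)
```
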
12. The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs and external failure costs.

13. A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities. [Graham]

14. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

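In `unittest` terms, a single test case bundles exactly those parts; the account balance, amounts, and names below are invented for illustration.

```python
import unittest

class WithdrawTestCase(unittest.TestCase):
    """One test case: input value, execution precondition, expected result,
    and execution postcondition. All names and amounts are hypothetical."""

    def setUp(self):
        # Execution precondition: an account with a known balance exists.
        self.balance = 100

    def withdraw(self, amount):
        # Hypothetical unit under test, inlined so the sketch is self-contained.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

    def test_withdraw_within_balance(self):
        # Input value: 30. Expected result: remaining balance of 70.
        self.assertEqual(self.withdraw(30), 70)
        # Execution postcondition: the account keeps the reduced balance.
        self.assertEqual(self.balance, 70)
```
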
15. The set from which valid input values can be selected. See also domain. The set from which valid input and/or output values can be selected.

16. An attribute of a test indicating whether the same results are produced each time the test is executed.

17. A transition between two states of a component or system.

18. A test plan that typically addresses multiple test levels. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. [After IEEE 829]

19. A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices.

20. A tool that carries out static analysis.

21. A test approach in which the test suite comprises all combinations of input values and preconditions.

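For a component with a tiny, finite input domain, such a suite can actually be built; the sketch below enumerates every input combination of a two-input gate. Anything larger quickly becomes infeasible, which is why exhaustive testing is rarely practical.

```python
from itertools import product

def xor_gate(a, b):
    # Tiny unit under test with a small, finite input domain.
    return a != b

# The exhaustive suite: all 2 x 2 = 4 combinations of the boolean inputs.
all_inputs = list(product([False, True], repeat=2))
results = {inputs: xor_gate(*inputs) for inputs in all_inputs}
```
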
22. The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]

23. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]

24. An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

25. A test whereby real-life users are involved to evaluate the usability of a component or system.

26. Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

27. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]

28. The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

29. A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.

30. Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]

31. Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.

32. An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify: (1) the form or content of the products to be produced, (2) the process by which the products shall be produced, and (3) how compliance to standards or guidelines shall be measured. [IEEE 1028]

33. A feature or characteristic that affects an item's quality. [IEEE 610]

34. A special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase. See also smoke test. A subset of all defined/planned test cases that cover the main functionality of a component or system.

35. The component or system to be tested. See also test item. The individual element to be tested. There usually is one test object and many test items. See also test object. A reason or purpose for designing and executing a test.

36. A set of one or more test cases. [IEEE 829]

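In `unittest` terms, a test suite is literally a container of test cases; the test names below are invented for illustration.

```python
import unittest

class MathTests(unittest.TestCase):
    # Two individual test cases; their names are hypothetical.
    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_abs(self):
        self.assertEqual(abs(-5), 5)

# The suite is simply the collection of test cases to be run together.
suite = unittest.TestSuite([MathTests("test_add"), MathTests("test_abs")])
```
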
37. A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.

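One common way to realize such a grid in code is a lookup table keyed by (state, event) pairs; the turnstile states and events below are a hypothetical example.

```python
# State table for a hypothetical turnstile: each (state, event) pair maps to
# the next state; pairs absent from the table are invalid transitions.
STATE_TABLE = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",      # pushing a locked turnstile does nothing
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",  # extra coin is simply absorbed
}

def next_state(state, event):
    try:
        return STATE_TABLE[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")
```
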
38. The process of testing to determine the efficiency of a software product.

39. The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.

40. A description of a component's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource utilization).

41. The process of testing to determine the interoperability of a software product.

42. A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

43. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

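The distinction between statement and branch coverage matters: in the hand-worked sketch below (an invented function, not a coverage tool), one test reaches every statement yet only half of the branches.

```python
def safe_div(a, b):
    result = 0
    if b != 0:
        result = a / b
    return result

# safe_div(6, 3) executes every statement (100% statement coverage) but only
# the "true" branch of the if; branch coverage is 1 of 2 = 50%.
branches_exercised = {("b != 0", True)}

# Adding safe_div(6, 0) exercises the implicit "false" branch as well.
branches_exercised.add(("b != 0", False))
branch_coverage = 100 * len(branches_exercised) / 2
```
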
44. An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes. [IEEE 610]

45. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]

46. Testware used in automated testing, such as tool scripts.

47. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]

48. Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

49. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

50. An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.