Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the correct answer obvious; even so, each pass through the test reinforces your understanding.
1. Execution of a test on a specific version of the test object.






2. A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
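For study purposes, a minimal Python sketch of the idea behind mutation analysis; the function, the hand-written mutants and the tests are invented for illustration (real tools generate mutants automatically):

# Mutation analysis sketch: how well does a test suite distinguish the
# original program from slight variants (mutants) of it?

def original(a, b):
    return a if a > b else b          # returns the larger of two values

# Hypothetical mutants: each changes one operator or expression.
mutants = [
    lambda a, b: a if a >= b else b,  # > mutated to >=
    lambda a, b: a if a < b else b,   # > mutated to <
    lambda a, b: a,                   # always returns the first argument
]

# The test suite being evaluated: (inputs, expected result) pairs.
test_suite = [((3, 5), 5), ((7, 2), 7)]

def kills(mutant):
    """A mutant is 'killed' if at least one test distinguishes it from the original."""
    return any(mutant(*args) != expected for args, expected in test_suite)

killed = sum(kills(m) for m in mutants)
print(f"mutation score: {killed}/{len(mutants)} = {killed / len(mutants):.0%}")

Here the first mutant survives because no test uses equal inputs, which is exactly the kind of gap the mutation score is meant to reveal.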






3. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]






4. The process of transforming general testing objectives into tangible test conditions and test cases.






5. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.






6. A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.






7. Comparison of actual and expected results, performed after the software has finished running.
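For study purposes, a minimal Python sketch of such a comparator; the file names are illustrative and not part of the question:

# Post-execution comparison: the program under test has already finished and
# written its output; the comparator only inspects the recorded results.

def compare_results(actual_path: str, expected_path: str) -> bool:
    with open(actual_path) as f:
        actual = f.read().splitlines()
    with open(expected_path) as f:
        expected = f.read().splitlines()
    mismatches = [
        (i + 1, a, e) for i, (a, e) in enumerate(zip(actual, expected)) if a != e
    ]
    for line_no, a, e in mismatches:
        print(f"line {line_no}: actual {a!r} != expected {e!r}")
    return not mismatches and len(actual) == len(expected)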






8. Coverage measures based on the internal structure of a component or system.






9. A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.






10. The capability of the software product to avoid unexpected effects from modifications in the software. [ISO 9126] See also maintainability. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]






11. The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
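A short worked example of the formula in Python; the figures are illustrative:

# Defect density = defects found / size, here expressed per thousand lines of code (KLOC).
defects_found = 18
lines_of_code = 12_000

defect_density = defects_found / (lines_of_code / 1000)   # defects per KLOC
print(f"{defect_density:.1f} defects/KLOC")               # 1.5 defects/KLOC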






12. Testing using input values that should be rejected by the component or system. See also error tolerance. The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].
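For study purposes, a minimal sketch of such tests using only the Python standard library; the parse_age function is hypothetical:

# Negative-input tests: the component is expected to reject these values.
import unittest

def parse_age(value: str) -> int:
    age = int(value)                      # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age

class NegativeInputTests(unittest.TestCase):
    def test_rejects_non_numeric(self):
        with self.assertRaises(ValueError):
            parse_age("twelve")

    def test_rejects_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_age("-3")

if __name__ == "__main__":
    unittest.main()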






13. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.






14. A requirement that specifies a function that a component or system must perform. [IEEE 610]






15. A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.






16. The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]






17. A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.
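For study purposes, a tiny sketch of step-by-step debugging with Python's built-in pdb; the average() function is invented for illustration:

# breakpoint() halts execution at that statement and opens the (Pdb) prompt,
# where 'n' runs the next line, 's' steps into a call, 'p total' prints a
# variable, and 'c' continues execution.

def average(values):
    total = 0
    for v in values:
        total += v
    breakpoint()            # drop into the debugger before the return
    return total / len(values)

if __name__ == "__main__":
    print(average([2, 4, 6]))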






18. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities.






19. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.






20. The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]






21. The first executable statement within a component.






22. A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.






23. A tool used to check that no broken hyperlinks are present on a web site.
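For study purposes, a minimal sketch of such a checker built on the Python standard library; real tools crawl whole sites and handle redirects, throttling and non-HTTP links more carefully. The example URL is illustrative:

# Fetch one page, extract href targets, and report any that fail to load.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url: str) -> None:
    parser = LinkExtractor()
    with urlopen(page_url) as response:
        parser.feed(response.read().decode("utf-8", errors="replace"))
    for href in parser.links:
        target = urljoin(page_url, href)          # resolve relative links
        try:
            urlopen(target).close()
        except (HTTPError, URLError, ValueError) as exc:
            print(f"broken link: {target} ({exc})")

# check_links("https://example.com/")            # example invocation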






24. The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






25. An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]






26. A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. [Chow] See also state transition testing. A black box test design technique in which test cases are designed to execute valid and invalid state transitions.
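For study purposes, a small Python sketch of deriving N-switch sequences from a state table; the door model (states and events) is invented for illustration:

# transitions: (current_state, event) -> next_state
transitions = {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
}

def n_switch_sequences(n: int):
    """All valid chains of n+1 consecutive transitions (n=0 gives single transitions)."""
    single = [(s, e, t) for (s, e), t in transitions.items()]
    sequences = [[step] for step in single]
    for _ in range(n):
        sequences = [
            seq + [step]
            for seq in sequences
            for step in single
            if step[0] == seq[-1][2]          # next transition starts where the last ended
        ]
    return sequences

for seq in n_switch_sequences(1):             # 1-switch: every valid pair of transitions
    print(" -> ".join(f"{s}--{e}-->{t}" for s, e, t in seq))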






27. A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities. [Graham]






28. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.
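For study purposes, a minimal Python sketch of such a representation as a graph, with the entry-to-exit paths enumerated; the node names are invented, and loops would need an explicit bound:

# Control flow represented as an adjacency list; each path is one possible
# sequence of events through the component.
control_flow = {
    "entry": ["decision"],
    "decision": ["then_branch", "else_branch"],
    "then_branch": ["exit"],
    "else_branch": ["exit"],
    "exit": [],
}

def all_paths(graph, node, goal, path=()):
    path = path + (node,)
    if node == goal:
        return [path]
    return [p for nxt in graph[node] for p in all_paths(graph, nxt, goal, path)]

for path in all_paths(control_flow, "entry", "exit"):
    print(" -> ".join(path))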






29. The process of testing to determine the reliability of a software product.






30. An entity in a programming language, which is typically the smallest indivisible unit of execution.






31. The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability. The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]






32. The process of recording information about tests executed into a test log.






33. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).






34. Testing performed to expose defects in the interfaces and interaction between integrated components.






35. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]






36. The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]






37. A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]






38. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.






39. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.






40. The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution. [IEEE 610]






41. Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions. [After Beizer]






42. A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing. A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. [Chow]






43. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
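A short worked example in Python; the classify() function and the test inputs are invented for illustration:

def classify(n):
    if n < 0:              # decision 1
        return "negative"
    if n == 0:             # decision 2
        return "zero"
    return "positive"

# A suite calling classify(-1) and classify(5) exercises:
#   decision 1 true (-1), decision 1 false (5), decision 2 false (5)
# but never decision 2 true, so decision coverage = 3/4 = 75%.
# Adding classify(0) raises it to 4/4 = 100%, which here also guarantees
# 100% branch coverage and 100% statement coverage.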






44. The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
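A short worked example of expressing availability as a percentage; the hour figures are illustrative:

required_hours = 24 * 30          # system required for a 30-day month
downtime_hours = 3.6              # total outage during that month

availability = (required_hours - downtime_hours) / required_hours
print(f"availability: {availability:.2%}")    # 99.50%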






45. Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.






46. A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]






47. A test is deemed to fail if its actual result does not match its expected result.






48. An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
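For study purposes, a minimal Python sketch of statement-level coverage measurement using sys.settrace; real coverage tools are far more robust, and the grade() function is invented for illustration:

import sys

def grade(score):
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    return "C"

executed = set()

def tracer(frame, event, arg):
    # record each executed line inside grade()
    if event == "line" and frame.f_code is grade.__code__:
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
grade(95)
grade(80)
sys.settrace(None)

total_statements = 5      # the five executable lines in grade()
print(f"statement coverage: {len(executed)}/{total_statements} "
      f"= {len(executed) / total_statements:.0%}")   # 4/5 = 80%: return "C" never ran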






49. Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.






50. The capability of the software product to enable the user to operate and control it. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]