Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you might find the answers obvious at times, but you will see that it reinforces your understanding each time you take the test.

1. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]

2. Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

3. A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test etc. A test type may take place on one or more test levels or test phases. [After TMap]

4. Tests aimed at showing that a component or system does not work. Negative testing is related to the testers' attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions. [After Beizer]

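For instance, a negative test deliberately feeds a component input it must reject. A minimal Python sketch, using a hypothetical parse_age() function invented for illustration:

    def parse_age(text):
        """Parse an age in years; raise ValueError for invalid input."""
        value = int(text)  # int() itself raises ValueError for non-numeric text
        if not 0 <= value <= 150:
            raise ValueError("age out of range")
        return value

    # Negative tests: invalid inputs, where clean rejection is the passing outcome.
    for bad_input in ["abc", "", "-1", "999"]:
        try:
            parse_age(bad_input)
        except ValueError:
            pass  # expected: the component refuses the input
        else:
            raise AssertionError(f"{bad_input!r} was wrongly accepted")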

5. A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

6. Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

7. Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

8. The activity of establishing or updating a test plan.

9. Choosing a set of input values to force the execution of a given path.

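A small sketch of this idea (the classify() function is hypothetical): each decision in the code constrains which input values can drive execution down a given path.

    def classify(x, y):
        if x > 0:       # decision A
            y = y + 1
        if y > 10:      # decision B
            return "high"
        return "low"

    # Forcing the path A-true then B-true requires x > 0 and y + 1 > 10,
    # so x = 1, y = 10 is one choice of input values that executes it.
    assert classify(1, 10) == "high"
    # Forcing A-false then B-false requires x <= 0 and y <= 10, e.g. x = 0, y = 0.
    assert classify(0, 0) == "low"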

10. The percentage of equivalence partitions that have been exercised by a test suite.

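The percentage is simply exercised partitions divided by all partitions. A sketch with three invented partitions for a month-number input:

    # Three equivalence partitions for a 1-12 month input (illustrative).
    partitions = {"valid 1-12": False, "below range": False, "above range": False}

    # A suite of two test inputs exercises only two of the three partitions.
    for test_input in (6, 0):
        if 1 <= test_input <= 12:
            partitions["valid 1-12"] = True
        elif test_input < 1:
            partitions["below range"] = True
        else:
            partitions["above range"] = True

    coverage = 100 * sum(partitions.values()) / len(partitions)
    print(f"equivalence partition coverage: {coverage:.0f}%")  # 67%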

11. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]

12. A set of exit criteria.

13. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

14. The process of testing to determine the resource-utilization of a software product. See also efficiency testing. The process of testing to determine the efficiency of a software product.

15. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

16. The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]

17. A test is deemed to pass if its actual result matches its expected result.

18. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

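Items 17, 18 and 22 describe the same mechanism from different angles; a minimal sketch of dynamic comparison, where add() stands in for any component under test:

    def compare(actual, expected):
        """Derive a pass/fail verdict by comparing actual and expected results."""
        return "pass" if actual == expected else "fail"

    def add(a, b):          # stand-in component under test
        return a + b

    print(compare(add(2, 2), expected=4))  # pass: actual matches expected
    print(compare(add(2, 2), expected=5))  # fail: actual differs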

19. The process of testing to determine the maintainability of a software product.

20. A path for which a set of input values and preconditions exists which causes it to be executed.

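A quick illustration of feasibility (the function f() is invented): some paths through code can be executed by choosing suitable inputs, others cannot.

    def f(x):
        if x > 10:          # decision 1
            x = x - 5
        if x < 0:           # decision 2
            return "negative"
        return "non-negative"

    # Path (1 true, 2 true) is infeasible: x > 10 implies x - 5 > 5, never < 0.
    # Path (1 false, 2 true) is feasible; x = -1 is an input that executes it.
    assert f(-1) == "negative"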

21. The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]

22. A test is deemed to fail if its actual result does not match its expected result.

23. A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.

24. An input for which the specification predicts a result.

25. A high level document describing the principles, approach and major objectives of the organization regarding testing.

26. A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose. [After IEEE 610, IEEE 1028]

27. An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

28. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).

29. The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]

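As a worked example, with illustrative figures, the percentage is operational time over total required time:

    uptime_hours = 719.2     # time the system was operational this month
    downtime_hours = 0.8     # time it was required but not operational
    availability = 100 * uptime_hours / (uptime_hours + downtime_hours)
    print(f"availability: {availability:.2f}%")   # 99.89%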

30. The process of identifying risks using techniques such as brainstorming, checklists and failure history.

31. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

32. A tool that supports stress testing.

33. A sequence of one or more consecutive executable statements containing no branches. Note: A node in a control flow graph represents a basic block.

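A rough sketch of where block boundaries fall in ordinary code (the function is invented; boundaries are marked in comments):

    def demo(n):
        total = 0        # basic block 1: consecutive statements with no
        i = 1            # branches until the loop test below
        while i <= n:    # the loop condition is a node (block) of its own
            total += i   # basic block 2: the loop body executes as a unit
            i += 1
        return total     # basic block 3: the single exit statement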

34. Execution of a test on a specific version of the test object.

35. Non-fulfillment of a specified requirement. [ISO 9000]

36. A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]

37. A technique used to analyze the causes of faults (defects). The technique visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults to disclose.

38. The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

39. A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]

40. An instance of an output. See also output. A variable (whether stored within a component or outside) that is written by a component.

41. A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

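For example, a suite derived from a function's structure so that 100% branch coverage is achieved (the discount rule is invented):

    def grant_discount(age, is_member):
        if age >= 65 or is_member:    # one decision, two outcomes
            return 0.10
        return 0.0

    # One test per decision outcome gives 100% branch coverage.
    assert grant_discount(70, False) == 0.10   # decision true
    assert grant_discount(30, False) == 0.0    # decision false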

42. A test environment comprised of stubs and drivers needed to execute a test.

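A minimal sketch of such a harness, with invented names: a stub replaces a component the unit depends on, and a driver invokes the unit and checks the result.

    def payment_gateway_stub(amount):
        """Stub: stands in for a real, unavailable payment service."""
        return {"status": "approved", "amount": amount}

    def checkout(amount, gateway):
        """Unit under test: depends on a gateway component."""
        return gateway(amount)["status"] == "approved"

    def driver():
        """Driver: executes the test and reports the outcome."""
        assert checkout(25.0, payment_gateway_stub) is True
        print("harness run: pass")

    driver()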

43. The process of testing the installability of a software product. See also portability testing. The process of testing to determine the portability of a software product.

44. The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]

45. A sequence of events (paths) in the execution through a component or system.

46. The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.

47. A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

48. A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.

49. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities.

50. The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability, robustness. The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]