Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find an answer obvious; even so, retaking the test reinforces your understanding.
1. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.

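As a study aid, here is a minimal sketch of the technique, assuming a hypothetical age-validation rule that accepts ages 0 to 120: the input domain splits into three partitions, and one representative value per partition covers each partition once.

```python
# Equivalence partitioning sketch for a hypothetical age validator.
# Assumed partitions: invalid-low (< 0), valid (0..120), invalid-high (> 120).

def is_valid_age(age: int) -> bool:
    """Accept ages from 0 to 120 inclusive (assumed specification)."""
    return 0 <= age <= 120

# One representative value per partition covers each partition at least once.
representatives = {"invalid-low": -5, "valid": 30, "invalid-high": 200}

assert is_valid_age(representatives["valid"])
assert not is_valid_age(representatives["invalid-low"])
assert not is_valid_age(representatives["invalid-high"])
```
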
2. A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

3. Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple domains, interconnected to address large-scale interdisciplinary common problems and purposes.

4. A black box test design technique in which test cases are designed to execute user scenarios.

5. The percentage of definition-use pairs that have been exercised by a test suite.

6. A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment.

7. The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality: the capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]

8. A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]

9. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.

10. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]

11. A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

12. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

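A minimal sketch of what this means in practice, using a hypothetical one-decision function: two test cases, one per branch outcome, are enough for 100% branch coverage here.

```python
def classify(n: int) -> str:
    # A single decision with two branch outcomes: taken and not taken.
    if n < 0:
        return "negative"
    return "non-negative"

# Exercising both outcomes of the decision achieves 100% branch coverage
# of this function, and with it 100% decision and statement coverage.
assert classify(-1) == "negative"
assert classify(0) == "non-negative"
```
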
13. Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

14. The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.

15. The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

16. Testing based on an analysis of the internal structure of the component or system.

17. Acronym for Computer Aided Software Testing: the use of software to perform or support test activities, e.g. test management, test design, test execution and results checking. See also test automation.

18. A variable (whether stored within a component or outside) that is read by a component.

19. A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

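A minimal sketch of the idea, assuming a hypothetical decision `a and b`: two test cases are enough to exercise every condition outcome (each of `a` and `b` both true and false) and every decision outcome.

```python
def decision(a: bool, b: bool) -> bool:
    # A decision made of two conditions, a and b.
    return a and b

# (True, True) and (False, False) together exercise each condition
# outcome and each decision outcome at least once.
tests = [((True, True), True), ((False, False), False)]
for (a, b), expected in tests:
    assert decision(a, b) == expected
```
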
20. A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called "heuristics").

21. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

22. A high level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP).

23. Procedure to derive and/or select test cases for nonfunctional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique.

24. A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).

25. Acronym for Computer Aided Software Engineering.

26. The process of recording information about tests executed into a test log.

27. A pointer within a web page that leads to other web pages.

28. Testing to determine the security of the software product. See also functionality testing. The process of testing to determine the functionality of a software product.

29. A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.

30. A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]

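A minimal sketch of such a model, using a hypothetical door with two states and event-triggered transitions, encoded as a lookup table:

```python
# Finite state machine sketch: states and transitions for a hypothetical door.
TRANSITIONS = {
    ("closed", "open_door"): "open",
    ("open", "close_door"): "closed",
}

def step(state: str, event: str) -> str:
    # Undefined (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "closed"
state = step(state, "open_door")   # transition to "open"
state = step(state, "close_door")  # transition back to "closed"
assert state == "closed"
```
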
31. The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

32. Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing: testing, either functional or non-functional, without reference to the internal structure of the component or system.

33. Comparison of actual and expected results, performed after the software has finished running.

34. A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.

35. The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]

36. The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

37. An element of storage in a computer that is accessible by a software program by referring to it by a name.

38. Execution of the test process against a single identifiable release of the test object.

39. A sequence of transactions in a dialogue between a user and the system with a tangible result.

40. A form of static analysis based on a representation of sequences of events (paths) in the execution through a component or system.

41. The activity of establishing or updating a test plan.

42. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

43. Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.

44. A sequence of events (paths) in the execution through a component or system.

45. The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.

46. The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine).

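A worked sketch of the formula above on two small control flow graphs (the graph sizes are illustrative, not from any particular program):

```python
def cyclomatic_complexity(edges: int, nodes: int, parts: int) -> int:
    # L - N + 2P from the definition above.
    return edges - nodes + 2 * parts

# Straight-line code: 3 nodes in a chain, 2 edges, 1 connected part
# -> complexity 1 (a single independent path).
assert cyclomatic_complexity(2, 3, 1) == 1

# One if/else diamond: decision, two branches, join = 4 nodes, 4 edges,
# 1 part -> complexity 2 (two independent paths).
assert cyclomatic_complexity(4, 4, 1) == 2
```
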
47. A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

48. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

49. A document specifying the test conditions (coverage items) for a test item and the detailed test approach, and identifying the associated high level test cases. [After IEEE 829]

50. A continuous framework for test process improvement that describes the key elements of an effective test process - especially targeted at system testing and acceptance testing.