Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. You may sometimes find the answers obvious, but taking the test repeatedly reinforces your understanding.
1. A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST, an acronym for Computer Aided Software Testing.






2. The response of a component or system to a set of input values and preconditions.






3. A sequence of events (paths) in the execution through a component or system.






4. An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.






5. The component or system to be tested. See also test item: the individual element to be tested; there is usually one test object and many test items. See also test objective: a reason or purpose for designing and executing a test.






6. A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program. [CMMI]






7. A requirement that specifies a function that a component or system must perform. [IEEE 610]






8. Choosing a set of input values to force the execution of a given path.
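For study purposes, a minimal Python sketch of this idea (the function and the input values are hypothetical): each decision point splits the control flow, and inputs are picked to force one specific path.

```python
# Hypothetical function with two decision points, giving four possible paths.
def classify(age, income):
    if age >= 18:          # decision 1
        group = "adult"
    else:
        group = "minor"
    if income > 50000:     # decision 2
        band = "high"
    else:
        band = "low"
    return group, band

# Input values chosen deliberately to force the path
# (age >= 18 is True) -> (income > 50000 is False):
assert classify(30, 20000) == ("adult", "low")
```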






9. Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.






10. A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]






11. Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).






12. Acronym for Commercial Off-The-Shelf software. See off-the-shelf software. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.






13. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.






14. A black box test design technique in which test cases are designed based on boundary values. See also boundary value. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
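A minimal sketch of the technique in Python, assuming a hypothetical field that accepts integers from 1 to 100: the test values sit on each edge and at the smallest incremental distance on either side of it.

```python
# Hypothetical validator: accepts integers in the range 1..100 inclusive.
def in_range(value):
    return 1 <= value <= 100

# Boundary values: each edge of the partition plus the value just
# outside and just inside it.
cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in cases.items():
    assert in_range(value) == expected, value
```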






15. The process of testing to determine the maintainability of a software product.






16. The percentage of executable statements that have been exercised by a test suite.
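A worked example of the metric itself (the counts are illustrative, not from any particular tool):

```python
# Statement coverage = exercised executable statements / total, as a percentage.
def statement_coverage(exercised, total):
    return 100.0 * exercised / total

# A suite exercising 45 of 60 executable statements:
print(statement_coverage(45, 60))  # 75.0
```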






17. The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]






18. A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]






19. The process of testing to determine the portability of a software product.






20. Testing of software used to convert data from existing systems for use in replacement systems.






21. A document that consists of a test design specification, test case specification and/or test procedure specification.






22. A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.






23. A white box test design technique in which test cases are designed to execute definition and use pairs of variables.
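For study purposes, a hypothetical Python function annotated with its definition-use pairs; data flow testing designs one test case per pair.

```python
# Definition-use pairs for the variable x (annotations are illustrative).
def f(a):
    x = a * 2        # definition of x (d1)
    if a > 0:
        return x + 1 # use of x reached from d1: pair (d1, this use)
    x = -a           # redefinition of x (d2)
    return x         # use of x reached from d2: pair (d2, this use)

# One test per def-use pair:
assert f(3) == 7     # exercises pair (d1, first use)
assert f(-4) == 4    # exercises pair (d2, second use)
```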






24. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






25. A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.






26. A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]






27. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.






28. The capability of the software product to enable modified software to be tested. [ISO 9126] See also maintainability. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]






29. Testing to determine the security of the software product. See also functionality testing. The process of testing to determine the functionality of a software product.






30. Deviation of the component or system from its expected delivery, service or result. [After Fenton]






31. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]
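A minimal sketch of driving tests from an operational profile, with hypothetical operations and probabilities: test inputs are generated in proportion to their expected frequency in typical use.

```python
import random

# Hypothetical operational profile: short-duration tasks and the
# probability of each in typical use.
profile = {"search": 0.60, "view_item": 0.30, "checkout": 0.10}

random.seed(1)  # reproducible run
operations = random.choices(list(profile), weights=profile.values(), k=1000)

# The generated test load mirrors the profile's probabilities.
print({op: operations.count(op) for op in profile})
```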






32. Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users' business procedures or operational procedures.






33. A program point at which the control flow has two or more alternative routes. A node with two or more links to separate branches.






34. Commonly used to refer to a test procedure specification - especially an automated one.






35. Supplied software on any suitable media, which leads the installer through the installation process. It normally runs the installation process, provides feedback on installation results, and prompts for options.






36. The capability of the software product to be installed in a specified environment. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






37. A skilled professional who is involved in the testing of a component or system.






38. Execution of a test on a specific version of the test object.






39. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]






40. Testing to determine the robustness of the software product.






41. A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]






42. A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
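A minimal sketch of the two facilities in Python, using only the standard library; the transaction is a hypothetical stand-in for a real request to the system under test.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for a real call to the system under test (hypothetical).
    time.sleep(0.01)

def timed_call(_):
    start = time.perf_counter()
    transaction()
    return time.perf_counter() - start  # response time for one transaction

# Load generation: simulate 20 concurrent users issuing 200 transactions,
# logging the measured response time of each.
with ThreadPoolExecutor(max_workers=20) as pool:
    times = list(pool.map(timed_call, range(200)))

print(f"mean response time: {sum(times) / len(times):.4f}s")
```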






43. Testing performed to expose defects in the interfaces and interaction between integrated components.






44. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once. [Beizer]
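A minimal sketch of the technique in Python, assuming a hypothetical rule that grades are valid only in the range 0 to 100: one representative value covers each partition.

```python
# Hypothetical rule: grades are valid only in the range 0..100.
def is_valid_grade(g):
    return 0 <= g <= 100

# Three equivalence partitions, each covered by one representative value:
# below the range, inside the range, above the range.
representatives = {-5: False, 50: True, 120: False}
for grade, expected in representatives.items():
    assert is_valid_grade(grade) == expected, grade
```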






45. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






46. A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






47. A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]






48. The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity. The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine). [After McCabe]
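Applying the formula to a small control flow graph (the edge and node counts are illustrative):

```python
# Cyclomatic complexity of a control flow graph: L - N + 2P.
def cyclomatic_complexity(edges, nodes, parts=1):
    return edges - nodes + 2 * parts

# Illustrative graph: 9 edges, 7 nodes, one connected component.
print(cyclomatic_complexity(9, 7))  # 4
```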






49. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
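For study purposes, a minimal post-execution comparison sketch in Python; the test case ids and result values are illustrative.

```python
# Compare actual results against expected results, keyed by test case id.
expected = {"TC1": 200, "TC2": 404, "TC3": 500}
actual   = {"TC1": 200, "TC2": 403, "TC3": 500}

differences = {tc: (actual[tc], expected[tc])
               for tc in expected if actual[tc] != expected[tc]}
print(differences)  # {'TC2': (403, 404)}
```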






50. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]