Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you may sometimes find the answers obvious, but repeating the test reinforces your understanding each time you take it.
1. The data received from an external source by the test object during test execution. The external source can be hardware, software, or human.






2. The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution. [IEEE 610]






3. The process of recognizing, investigating, taking action on, and disposing of incidents. It involves logging incidents, classifying them, and identifying the impact. [After IEEE 1044]






4. (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126] See also reliability.






5. An aggregation of hardware, software, or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]






6. A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]
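Study aid: such a diagram can be sketched directly in code as a transition table. A minimal Python sketch follows, using a hypothetical turnstile with locked/unlocked states; all names are illustrative, not from any standard.

    # Illustrative sketch: a turnstile modeled as a state transition table.
    # States, events, and transitions here are hypothetical examples.
    TRANSITIONS = {
        ("locked", "coin"): "unlocked",    # paying unlocks the turnstile
        ("locked", "push"): "locked",      # pushing while locked does nothing
        ("unlocked", "push"): "locked",    # passing through re-locks it
        ("unlocked", "coin"): "unlocked",  # extra coins are accepted but wasted
    }

    def next_state(state, event):
        """Return the state reached when `event` occurs in `state`."""
        return TRANSITIONS[(state, event)]

    # Walking the diagram: each lookup follows one arrow from state to state.
    state = "locked"
    for event in ["push", "coin", "push"]:
        state = next_state(state, event)
    assert state == "locked"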






7. The result of a decision (which therefore determines the branches to be taken).






8. The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






9. A description of a component's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource utilization).






10. The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]






11. The individual element to be tested. There is usually one test object and many test items. See also test object. A reason or purpose for designing and executing a test.






12. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






13. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
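Study aid: a minimal Python sketch of the two comparison styles named above; the function under test is a hypothetical example.

    # Illustrative sketch of dynamic vs. post-execution comparison.
    def add(a, b):          # hypothetical component under test
        return a + b

    # Dynamic comparison: actual vs. expected checked during execution.
    assert add(2, 3) == 5

    # Post-execution comparison: actual results are collected first,
    # then diffed against expected results afterwards.
    actual = [add(x, 1) for x in range(3)]
    expected = [1, 2, 3]
    mismatches = [(a, e) for a, e in zip(actual, expected) if a != e]
    assert mismatches == []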






14. The process of running a test on the component or system under test - producing actual result(s).






15. A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]






16. A minimal software item that can be tested in isolation.
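Study aid: "in isolation" typically means the item's collaborators are replaced by stubs. A minimal Python sketch, with all names hypothetical:

    # Illustrative sketch: testing a unit in isolation by stubbing its dependency.
    def price_with_tax(net, rate_lookup):
        """The unit under test; its collaborator is injected."""
        return round(net * (1 + rate_lookup()), 2)

    def stub_rate():
        # Stands in for a real tax service so the unit is tested alone.
        return 0.20

    assert price_with_tax(10.00, stub_rate) == 12.00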






17. Testing of software or specification by manual simulation of its execution. See also static analysis. Analysis of software artifacts - e.g. requirements or code - carried out without execution of these software artifacts.






18. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.






19. A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.






20. Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.






21. A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).






22. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
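Study aid: a minimal Python sketch of the technique above, using a hypothetical "age" field with a valid range of 18-65; partition names and values are illustrative.

    # Illustrative sketch: one representative test value per equivalence partition.
    partitions = {
        "below range (invalid)": 10,
        "within range (valid)": 40,
        "above range (invalid)": 80,
    }

    def accepts_age(age):   # hypothetical function under test
        return 18 <= age <= 65

    # Each partition is covered at least once, as the definition requires.
    for name, representative in partitions.items():
        expected = name == "within range (valid)"
        assert accepts_age(representative) == expected, name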






23. A systematic way of testing all-pair combinations of variables using orthogonal arrays. It significantly reduces the number of all combinations of variables to test all pair combinations. See also pairwise testing. A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters.
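Study aid: the reduction can be checked in code. A minimal Python sketch follows with three hypothetical 2-valued parameters: a hand-picked 4-test suite covers every pair of values, versus 8 tests for all combinations.

    # Illustrative sketch: verifying pairwise coverage of a small test suite.
    from itertools import combinations, product

    params = [("win", "mac"), ("ff", "chrome"), ("en", "de")]
    suite = [
        ("win", "ff", "en"),
        ("win", "chrome", "de"),
        ("mac", "ff", "de"),
        ("mac", "chrome", "en"),
    ]

    # Every pair of parameter positions must see every value combination.
    for i, j in combinations(range(len(params)), 2):
        wanted = set(product(params[i], params[j]))
        covered = {(t[i], t[j]) for t in suite}
        assert wanted == covered

    assert len(suite) == 4 < len(list(product(*params))) == 8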






24. A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test level. [After Gerrard]






25. A tool that provides support for testing security characteristics and vulnerabilities.






26. A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]






27. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
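Study aid: a minimal Python sketch of such chaining; the Cart class and the tests are hypothetical examples.

    # Illustrative sketch: each test's postcondition becomes the next
    # test's precondition via shared state.
    class Cart:
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

    cart = Cart()                 # shared state threaded through the suite

    def test_add_first_item():
        cart.add("book")
        assert cart.items == ["book"]          # postcondition of test 1

    def test_add_second_item():
        # Precondition: "book" is already in the cart (from the previous test).
        cart.add("pen")
        assert cart.items == ["book", "pen"]

    test_add_first_item()
    test_add_second_item()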






28. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished.






29. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]
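Study aid: a minimal Python sketch of sampling test operations from such a model; the operation names and probabilities are hypothetical.

    # Illustrative sketch: drawing operations according to an operational profile.
    import random

    profile = {"search": 0.70, "browse": 0.25, "checkout": 0.05}

    random.seed(42)  # reproducible sampling for the example
    operations = random.choices(
        population=list(profile),
        weights=list(profile.values()),
        k=1000,
    )
    # Frequent operations dominate the generated test sequence,
    # mirroring their probability of typical use.
    assert operations.count("search") > operations.count("checkout")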






30. A review not based on a formal (documented) procedure.






31. A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system. See also performance testing, stress testing.






32. A set of one or more test cases. [IEEE 829]






33. A black box test design technique in which test cases are designed to execute user scenarios.
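Study aid: a minimal Python sketch of a test walking one user scenario end to end; the Account class and scenario steps are hypothetical.

    # Illustrative sketch: a test case that executes a user scenario.
    class Account:
        def __init__(self):
            self.logged_in = False
            self.balance = 0

        def login(self):
            self.logged_in = True

        def deposit(self, amount):
            assert self.logged_in, "scenario requires a logged-in user"
            self.balance += amount

    # Scenario: user logs in, then deposits funds.
    account = Account()
    account.login()
    account.deposit(50)
    assert account.balance == 50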






34. The activity of establishing or updating a test plan.






35. The set from which valid input and/or output values can be selected.






36. Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.






37. The degree to which a component, system, or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]






38. A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program. [CMMI]






39. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability, or design constraints). [After IEEE 1008]






40. A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.






41. The assessment of change to the layers of development documentation, test documentation, and components, in order to implement a given change to specified requirements.






42. The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.
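Study aid: "independently affect" can be demonstrated in code. A minimal Python sketch follows, using the hypothetical decision A and (B or C): for each condition it finds a pair of test cases that differ only in that condition yet produce different outcomes.

    # Illustrative sketch: each single condition can flip the decision alone.
    from itertools import product

    def decision(a, b, c):
        return a and (b or c)

    for position in range(3):
        found = False
        for case in product([False, True], repeat=3):
            flipped = list(case)
            flipped[position] = not flipped[position]
            if decision(*case) != decision(*flipped):
                found = True
                break
        assert found, f"condition {position} never affects the outcome"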






43. A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.






44. Testing based on an analysis of the internal structure of the component or system.






45. A form of static analysis based on the definition and usage of variables.
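Study aid: a minimal Python sketch of a toy define-use check over a recorded sequence of variable events; the events and anomaly messages are hypothetical.

    # Illustrative sketch: flagging use-before-definition and unused definitions.
    events = [
        ("def", "x"),   # x is defined
        ("use", "x"),   # ...and later used: a normal def-use pair
        ("use", "y"),   # y is used before any definition: an anomaly
        ("def", "z"),   # z is defined but never used: another anomaly
    ]

    defined, used = set(), set()
    anomalies = []
    for kind, name in events:
        if kind == "def":
            defined.add(name)
        elif kind == "use":
            used.add(name)
            if name not in defined:
                anomalies.append(f"{name} used before definition")
    anomalies += [f"{name} defined but never used" for name in defined - used]

    assert anomalies == ["y used before definition", "z defined but never used"]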






46. A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]






47. A minimal software item that can be tested in isolation.






48. A system whose failure or malfunction may result in death or serious injury to people, or loss or severe damage to equipment, or environmental harm.






49. An element of storage in a computer that is accessible by a software program by referring to it by a name.






50. A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
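Study aid: a minimal Python sketch of a function with exactly two entry-to-exit paths; the function is a hypothetical example.

    # Illustrative sketch: enumerating the paths through a small function.
    def classify(n):            # hypothetical component
        if n < 0:               # decision point
            return "negative"       # path 1: entry -> if-true -> exit
        return "non-negative"       # path 2: entry -> if-false -> exit

    # One test per path exercises every entry-to-exit sequence.
    assert classify(-1) == "negative"
    assert classify(1) == "non-negative"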