Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. A test plan that typically addresses one test phase. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]






2. A tool that carries out static analysis.






3. The behavior produced/observed when a component or system is tested.






4. An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress. See also exploratory testing.






5. A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available. See also low level test case. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators.






6. Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload. See also load profile, operational profile.






7. A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.






8. A white box test design technique in which test cases are designed to execute decision outcomes.
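As a study aid, a minimal Python sketch of this technique, using a made-up is_adult() function: the two checks execute both outcomes (True and False) of its single decision.

    def is_adult(age):
        if age >= 18:              # the decision under test
            return True
        return False

    assert is_adult(21) is True    # executes the True outcome
    assert is_adult(5) is False    # executes the False outcome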






9. A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
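A minimal sketch of the idea in Python, assuming a hypothetical discount() component whose real caller does not exist yet; the driver() function takes over the calling and control.

    def discount(price, percent):
        return price * (1 - percent / 100)

    def driver():
        # stands in for the not-yet-built checkout module that will call discount()
        assert discount(100, 10) == 90.0
        assert discount(50, 0) == 50.0
        print("all driver checks passed")

    if __name__ == "__main__":
        driver()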






10. An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. See also integration testing.
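One top-down step sketched in Python (hypothetical components): the top-level report() is tested first, with the lower-level data layer simulated by a stub.

    def fetch_total_stub():
        return 42                              # stub simulating the untested data layer

    def report(fetch_total=fetch_total_stub):
        return "total: %d" % fetch_total()

    assert report() == "total: 42"             # once report() passes, the real
                                               # fetch_total() is integrated and
                                               # tested through it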






11. The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution. [IEEE 610]






12. A description of a component's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource utilization).






13. A path for which a set of input values and preconditions exists which causes it to be executed.






14. A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
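A small Python illustration of such chaining (made-up example): the postcondition of the first test, a created user, is the precondition of the second.

    users = {}

    def test_create_user():
        users["alice"] = {"active": True}
        assert "alice" in users                  # postcondition: user exists

    def test_deactivate_user():
        users["alice"]["active"] = False         # precondition: user exists
        assert users["alice"]["active"] is False

    test_create_user()
    test_deactivate_user()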






15. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing. Testing based on an analysis of the specification of the functionality of a component or system.






16. The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]






17. A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.






18. The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).






19. The evaluation of a condition to True or False.






20. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]






21. Supplied software on any suitable media, which leads the installer through the installation process. It normally runs the installation process, provides feedback on installation results, and prompts for options.






22. A questionnaire based usability test technique to evaluate the usability, e.g. user satisfaction, of a component or system. [Veenendaal]






23. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
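A minimal Python sketch (hypothetical accepts() function, valid for 1-100): each of the three partitions is covered by one representative value.

    def accepts(value):
        return 1 <= value <= 100

    assert accepts(50) is True     # representative of the valid partition 1..100
    assert accepts(0) is False     # representative of the below-range partition
    assert accepts(101) is False   # representative of the above-range partition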






24. The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]






25. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
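A worked example of the ratio with made-up numbers, here using time as the unit of measure:

    failures = 12              # observed failures of a given category
    hours_of_operation = 400   # chosen unit of measure: hours
    failure_rate = failures / hours_of_operation
    print(failure_rate)        # 0.03 failures per hour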






26. A technique used to analyze the causes of faults (defects). The technique visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults to disclose.






27. A sequence of transactions in a dialogue between a user and the system with a tangible result.






28. A special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase. See also smoke test. A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details.






29. The process of testing to determine the performance of a software product. See also efficiency testing. The process of testing to determine the efficiency of a software product.






30. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]






31. A form of static analysis based on a representation of sequences of events (paths) in the execution through a component or system.






32. An element of configuration management, consisting of the recording and reporting of information needed to manage a configuration effectively. This information includes a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes. [IEEE 610]






33. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
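A minimal Python sketch (hypothetical grant_access() function): the loop executes all four combinations of the two single condition outcomes within the one decision statement.

    import itertools

    def grant_access(is_admin, has_token):
        return is_admin and has_token      # one statement, two single conditions

    for is_admin, has_token in itertools.product([True, False], repeat=2):
        # True/True, True/False, False/True, False/False are all executed
        assert grant_access(is_admin, has_token) == (is_admin and has_token)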






34. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
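A worked example of the estimate this supports, with made-up numbers: if 8 of 10 seeded defects are detected alongside 40 real ones, the same 80% detection rate suggests roughly 50 real defects in total, so about 10 remaining.

    seeded = 10
    seeded_found = 8
    real_found = 40

    detection_rate = seeded_found / seeded               # 0.8
    estimated_real_total = real_found / detection_rate   # 50.0
    estimated_remaining = estimated_real_total - real_found
    print(estimated_remaining)                           # 10.0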






35. The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






36. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






37. A variable (whether stored within a component or outside) that is written by a component.






38. Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing. Testing, either functional or non-functional, without reference to the internal structure of the component or system.






39. A specific category of risk related to the type of testing that can mitigate (control) that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.






40. The capability of the software product to enable modified software to be tested. [ISO 9126] See also maintainability. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]






41. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.






42. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.






43. A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]






44. A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.






45. The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.






46. Supplied instructions on any suitable media, which guide the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.






47. A high level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP).
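A worked example with made-up numbers, using one common formulation of DDP (defects found by testing divided by total defects found, including those that escaped to the field):

    found_in_test = 90
    found_after_release = 10
    ddp = found_in_test / (found_in_test + found_after_release)
    print("DDP = %.0f%%" % (ddp * 100))   # DDP = 90%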






48. Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.






49. A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices. See also intake test.






50. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
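A minimal Python sketch of dynamic comparison (made-up add() function): the actual result is checked against the expected result during test execution.

    def add(a, b):
        return a + b

    expected = 5
    actual = add(2, 3)
    assert actual == expected, "expected %r, got %r" % (expected, actual)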