Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find an answer obvious, but taking the test repeatedly reinforces your understanding.

1. A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices.






2. A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [After IEEE 829]






3. Operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by operator and/or administrator focusing on operational aspects, e.g. recoverability, resource-behavior, installability and technical compliance.






4. An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress.






5. The process of testing to determine the recoverability of a software product. See also reliability testing: the process of testing to determine the reliability of a software product.






6. The set from which valid input values can be selected. See also domain: the set from which valid input and/or output values can be selected.






7. A test whereby real-life users are involved to evaluate the usability of a component or system.






8. The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
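For instance, a minimal C sketch of the comparison at the heart of such tooling: an actual result is produced by the test object, compared to a predefined expected result, and reported. The function add and its values are invented for illustration; real capture/playback tools wrap this same check in recording and reporting machinery.

    #include <stdio.h>

    /* Hypothetical function under test. */
    static int add(int a, int b) { return a + b; }

    int main(void) {
        int expected = 5;          /* expected result, defined in advance  */
        int actual   = add(2, 3);  /* actual result, produced by execution */

        /* The automated part: compare actual to expected and report. */
        if (actual == expected)
            printf("PASS: add(2, 3) == %d\n", expected);
        else
            printf("FAIL: expected %d, got %d\n", expected, actual);
        return actual == expected ? 0 : 1;
    }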






9. A test case that cannot be executed because the preconditions for its execution are not fulfilled.






10. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.






11. The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.






12. The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]






13. A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.






14. Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.






15. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.






16. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]






17. Testing of software or specification by manual simulation of its execution. See also static analysis: analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.






18. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.






19. Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]






20. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities.






21. The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.






22. A test plan that typically addresses one test level. See also test plan: a document describing the scope, approach, resources and schedule of intended test activities, identifying amongst others test items, the features to be tested, the testing tasks, and who will do each task.






23. A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]






24. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]






25. All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis. [After TMap]






26. A memory access defect due to the attempt by a process to store data beyond the boundaries of a fixed length buffer, resulting in overwriting of adjacent memory areas or the raising of an overflow exception. See also buffer: a device or storage area used to store data temporarily.






27. A device or storage area used to store data temporarily for differences in rates of data flow, time or occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data. [IEEE 610]
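Questions 26 and 27 pair up naturally; a minimal C sketch of the defect, with invented names, shows how copying into a fixed-length buffer without a length check overwrites adjacent memory, and how a bounded copy avoids it:

    #include <string.h>

    void greet(const char *name) {
        char buf[8];           /* fixed-length buffer: 8 bytes */
        strcpy(buf, name);     /* DEFECT: no length check; any name longer
                                  than 7 characters plus the terminating
                                  '\0' writes past the end of buf */
    }

    void greet_safe(const char *name) {
        char buf[8];
        strncpy(buf, name, sizeof buf - 1);  /* bounded copy */
        buf[sizeof buf - 1] = '\0';          /* always terminate */
    }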






28. The component or system to be tested. See also test item: the individual element to be tested; there usually is one test object and many test items. The related term test objective denotes a reason or purpose for designing and executing a test.






29. A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
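As an illustration, assuming a hypothetical specification that accepts ages 18 to 65: the input domain splits into three equivalence partitions, and one representative value per partition is assumed to stand for the whole partition.

    /* Hypothetical specification: ages 18..65 are valid, giving three
       partitions: below 18 (invalid), 18..65 (valid), above 65 (invalid). */
    int is_valid_age(int age) {
        return age >= 18 && age <= 65;
    }

    /* One representative test value per partition:
         is_valid_age(10) -> 0   (partition: below 18)
         is_valid_age(40) -> 1   (partition: 18..65)
         is_valid_age(70) -> 0   (partition: above 65) */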






30. The capability of the software product to be attractive to the user. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






31. A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]
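A minimal sketch of such a model in C, using a two-state door with open/close events (states, events and transitions invented for illustration):

    typedef enum { CLOSED, OPEN } State;
    typedef enum { EV_OPEN, EV_CLOSE } Event;

    /* The transition function: current state + event -> next state. */
    State transition(State s, Event e) {
        switch (s) {
            case CLOSED: return (e == EV_OPEN)  ? OPEN   : CLOSED;
            case OPEN:   return (e == EV_CLOSE) ? CLOSED : OPEN;
        }
        return s;  /* unreachable for valid states */
    }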






32. A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.






33. Any event occurring that requires investigation. [After IEEE 1008]






34. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).






35. The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO 9126]






36. A white box test design technique in which test cases are designed to execute branches.






37. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
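As a sketch of questions 36 and 37 together, under an invented example: the function below contains one decision with two outcomes, so a suite of two test cases, one per branch, already reaches 100% decision coverage, and with it 100% statement coverage.

    /* One decision (x < 0) with two outcomes. */
    int absolute(int x) {
        if (x < 0)          /* decision */
            return -x;      /* true outcome  */
        return x;           /* false outcome */
    }

    /* Test suite achieving 100% decision coverage:
         absolute(-5) exercises the true outcome,
         absolute(3)  exercises the false outcome.
       Together they also execute every statement. */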






38. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]






39. The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability: the ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.






40. The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]






41. An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]






42. The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.






43. Decision rules used to determine whether a test item (function) or feature has passed or failed a test. [IEEE 829]






44. During the test closure phase of a test process data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report.






45. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result (the behavior produced/observed when a component or system is tested) and expected result.






46. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]






47. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






48. The leader and main person responsible for an inspection or other review process.






49. An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.






50. An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]