Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from answers to other questions, so you may sometimes find an answer obvious, but taking the test repeatedly reinforces your understanding.

1. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

2. A specification of the activity which a component or system being tested may experience in production. A load profile consists of a designated number of virtual users who process a defined set of transactions in a specified time period and according to a predefined operational profile. See also operational profile.

3. The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]

4. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]

5. Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.

6. The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]

7. Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload. See also load profile, operational profile.

8. A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.

9. Testing where the system is subjected to large volumes of data. See also resource-utilization testing. The process of testing to determine the resource-utilization of a software product.

10. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]

11. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]

12. Recording the details of any incident that occurred, e.g. during testing.

13. Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

14. The process of testing to determine the portability of a software product.

15. The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]

16. A tree showing equivalence partitions hierarchically ordered, which is used to design test cases in the classification tree method. See also classification tree method. A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains. [Grochtmann]

17. Code that cannot be reached and therefore is impossible to execute.
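For example, a minimal Python sketch (the function is hypothetical): the final statement can never run because every path returns before it, making it unreachable code.

```python
def classify(n):
    # both paths return, so control never reaches the last line
    if n >= 0:
        return "non-negative"
    return "negative"
    print("finished")  # unreachable code: impossible to execute
```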

18. The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

19. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
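As a sketch (hypothetical function and test suite): the `if` below is a single decision with two outcomes, so a two-test suite that takes both the True branch and the False branch achieves 100% decision coverage.

```python
def discount(price, is_member):
    # one decision, two outcomes
    if is_member:
        return price * 0.9
    return price

# exercising both decision outcomes gives 100% decision coverage here
suite = [(100, True), (100, False)]
results = [discount(p, m) for p, m in suite]
```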

20. Acronym for Computer Aided Software Testing. See also test automation. The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

21. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]

22. A test plan that typically addresses multiple test levels. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. [After IEEE 829]

23. Procedure to derive and/or select test cases for nonfunctional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique. Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

24. A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.

25. The capability of the software product to be attractive to the user. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

26. A white box test design technique in which test cases are designed to execute branches.

27. A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data driven testing.
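A toy sketch of the idea in Python (all names are illustrative): the "data file" rows pair a keyword with an argument and an expected result, and a control script dispatches each keyword to a small supporting script.

```python
# data-file rows: (keyword, argument, expected result); None = no check
TEST_DATA = [
    ("enter",  "alice", None),
    ("verify", "alice", True),
]

class FakeLoginForm:  # hypothetical application under test
    def __init__(self):
        self.username = ""

# supporting scripts, one per keyword
def kw_enter(app, value):
    app.username = value

def kw_verify(app, value):
    return app.username == value

KEYWORDS = {"enter": kw_enter, "verify": kw_verify}

def control_script(rows):
    # interpret each row by dispatching its keyword to a supporting script
    app, passed = FakeLoginForm(), []
    for keyword, arg, expected in rows:
        actual = KEYWORDS[keyword](app, arg)
        passed.append(expected is None or actual == expected)
    return passed
```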

28. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

29. The process of testing the installability of a software product. See also portability testing. The process of testing to determine the portability of a software product.

30. Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.

31. A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO178b] See also emulator. A device, computer program or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610]

32. A point in time in a project at which defined (intermediate) deliverables and results should be ready.

33. A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
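A small sketch (hypothetical ATM rules): each row of the decision table pairs a combination of causes with its expected effect, and each combination becomes one test case.

```python
# decision table: (valid_card, valid_pin) -> expected action
DECISION_TABLE = [
    ((True,  True),  "dispense_cash"),
    ((True,  False), "reject_pin"),
    ((False, True),  "reject_card"),
    ((False, False), "reject_card"),
]

def atm_action(valid_card, valid_pin):  # hypothetical system under test
    if not valid_card:
        return "reject_card"
    return "dispense_cash" if valid_pin else "reject_pin"

def derive_test_cases(table):
    # decision table testing: every rule in the table is one test case
    return [(inputs, expected) for inputs, expected in table]
```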

34. The percentage of definition-use pairs that have been exercised by a test suite.

35. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

36. An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress. See also exploratory testing.

37. A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence. See also Failure Mode, Effect and Criticality Analysis (FMECA).

38. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

39. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
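For instance (hypothetical function): the single return statement below contains two conditions, so condition combination testing designs one test per combination of their outcomes, four in total.

```python
def ships_free(total, is_member):
    # one statement, two single conditions: 2 x 2 = 4 outcome combinations
    return total >= 50 or is_member

# condition combination testing: all four combinations of condition outcomes
all_combinations = [(60, True), (60, False), (40, True), (40, False)]
verdicts = [ships_free(t, m) for t, m in all_combinations]
```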

40. A variable (whether stored within a component or outside) that is read by a component.

41. A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

42. The evaluation of a condition to True or False.

43. An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

44. The behavior produced/observed when a component or system is tested.

45. An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

46. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

47. Testware used in automated testing - such as tool scripts.

48. A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.
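As a sketch in Python (a hypothetical door control): the grid maps each (state, event) pair to the resulting state, and pairs absent from the grid are the invalid transitions.

```python
# state table: (state, event) -> resulting state; missing pairs are invalid
STATE_TABLE = {
    ("closed", "open"):   "opened",
    ("closed", "lock"):   "locked",
    ("opened", "close"):  "closed",
    ("locked", "unlock"): "closed",
}

def next_state(state, event):
    # returns None for an invalid transition
    return STATE_TABLE.get((state, event))
```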

49. A description of a component's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource utilization).

50. The percentage of boundary values that have been exercised by a test suite.
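To make the metric concrete, a small Python sketch (assuming the boundary values of a closed range are the two boundaries plus their nearest neighbours just outside the range):

```python
def boundary_values(lo, hi):
    # boundaries of a closed range plus their nearest invalid neighbours
    return {lo - 1, lo, hi, hi + 1}

def boundary_value_coverage(test_inputs, lo, hi):
    # percentage of boundary values exercised by the suite's inputs
    bounds = boundary_values(lo, hi)
    return 100.0 * len(bounds & set(test_inputs)) / len(bounds)
```

For a 1..100 range the boundary values under this assumption are 0, 1, 100 and 101; a suite whose inputs include three of them reaches 75% boundary value coverage.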