Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding each time you take the test.
1. A device or storage area used to store data temporarily for differences in rates of data flow, time or occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data. [IEEE 610]






2. Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.






3. A tool used to check that no broken hyperlinks are present on a web site.






4. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.






5. A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.






6. An instance of an input. See also input. A variable (whether stored within a component or outside) that is read by a component.






7. The process of testing to determine the maintainability of a software product.






8. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]






9. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.






10. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.






11. An extension of FMEA, as in addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value.






12. A minimal software item that can be tested in isolation.






13. An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]






14. Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. See also black box test design technique.






15. The process of testing to determine the reliability of a software product.






16. The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See also efficiency.






17. The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.






18. A tool that provides objective measures of what structural elements, e.g. statements, branches, have been exercised by a test suite.






19. The percentage of equivalence partitions that have been exercised by a test suite.






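For study purposes, the coverage measure described in question 19 can be illustrated with a minimal Python sketch. All names here (the partitions, the classifier, the sample inputs) are hypothetical, chosen only to show how the percentage would be computed.

```python
# Hypothetical equivalence partitions for an integer input,
# and a classifier that maps each input value to its partition.
partitions = {"negative", "zero", "small_positive", "large_positive"}

def partition_of(x):
    # Classify an input value into its equivalence partition.
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    if x <= 100:
        return "small_positive"
    return "large_positive"

# A test suite exercising 3 of the 4 partitions.
test_inputs = [-5, 0, 42]
exercised = {partition_of(x) for x in test_inputs}

# Equivalence partition coverage: exercised partitions / all partitions.
coverage = 100 * len(exercised & partitions) / len(partitions)
print(f"{coverage:.0f}%")
```

Adding a test input above 100 (say, 500) would exercise the remaining partition and raise the coverage to 100%.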
20. The behavior produced/observed when a component or system is tested.






21. The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]






22. A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.






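As a study aid for question 22, here is a minimal sketch of the technique. The `discount` function and its test cases are hypothetical; the point is that the cases make each atomic condition and the overall decision take both the true and false outcome.

```python
# A decision with two atomic conditions: is_member and total > 100.
def discount(is_member, total):
    if is_member and total > 100:
        return 0.1
    return 0.0

# Three cases give both outcomes to each condition and to the decision:
#   (True, 150):  is_member True,  total>100 True,  decision True
#   (True, 50):   is_member True,  total>100 False, decision False
#   (False, 150): is_member False,                  decision False
cases = [(True, 150), (True, 50), (False, 150)]
for is_member, total in cases:
    print(is_member, total, discount(is_member, total))
```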
23. The testing of individual software components. [After IEEE 610]






24. A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available. See also low level test case.






25. A five level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance. [CMM] See also Capability Maturity Model Integration.






26. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.






27. Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.






28. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






29. A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).






30. A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.






31. The response of a component or system to a set of input values and preconditions.






32. The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]






33. A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.






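The test-first workflow of question 33 can be sketched in a few lines. The `slugify` example below is hypothetical: in test-driven development the failing test is written first, and the production code is then written just to make it pass.

```python
# Step 1: write the test first (it fails until slugify exists).
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: write just enough production code to make the test pass.
def slugify(text):
    return "-".join(text.lower().split())

# Step 3: run the test; then refactor with the test as a safety net.
test_slugify()
print("all tests pass")
```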
34. Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple interconnected domains, addressing large-scale inter-disciplinary common problems and purposes.






35. A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.






36. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing.






37. A path that cannot be exercised by any set of possible input values.






38. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).






39. A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing. A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions.






40. A risk directly related to the test object. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.






41. Testing the changes to an operational system or the impact of a changed environment to an operational system.






42. A white box test design technique in which test cases are designed to execute LCSAJs.






43. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.






44. A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing. A systematic way of testing all-pair combinations of variables using orthogonal arrays.






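Pairwise testing (question 44) is easiest to see with numbers. The sketch below uses three hypothetical two-valued parameters: exhaustive testing would need 2×2×2 = 8 cases, but 4 cases suffice to cover every pair of values, and the code verifies that.

```python
from itertools import combinations, product

# Hypothetical parameters: OS, browser, locale (two values each).
params = [["linux", "windows"], ["chrome", "firefox"], ["en", "de"]]

# A candidate pairwise suite of 4 cases (vs. 8 for all combinations).
suite = [
    ("linux", "chrome", "en"),
    ("linux", "firefox", "de"),
    ("windows", "chrome", "de"),
    ("windows", "firefox", "en"),
]

# Every value pair, across every pair of parameters, that must appear.
required = set()
for (i, a), (j, b) in combinations(enumerate(params), 2):
    for va, vb in product(a, b):
        required.add(((i, va), (j, vb)))

# Pairs actually exercised by the suite.
covered = set()
for case in suite:
    for pair in combinations(enumerate(case), 2):
        covered.add(pair)

print(len(suite), "cases cover all", len(required), "pairs:",
      required <= covered)
```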
45. The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase.






46. Testing of software used to convert data from existing systems for use in replacement systems.






47. A sequence of events (paths) in the execution through a component or system.






48. A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]






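The model in question 48 can be written down directly as a transition table. The turnstile below is a hypothetical example: a finite set of states, a finite set of events, and a mapping from (state, event) to the next state.

```python
# Finite state machine for a turnstile: two states, two events.
transitions = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(events, state="locked"):
    # Feed a sequence of events through the machine; return final state.
    for event in events:
        state = transitions[(state, event)]
    return state

print(run(["coin"]))          # coin unlocks the turnstile
print(run(["coin", "push"]))  # one passage, then locked again
```

State transition testing (question 39) amounts to choosing event sequences that drive such a model through its valid, and deliberately invalid, transitions.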
49. A programming language in which executable test scripts are written - used by a test execution tool (e.g. a capture/playback tool).






50. Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]