Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but taking the test repeatedly will reinforce your understanding.

1. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
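
To make the technique concrete, here is a minimal Python sketch; the age rule and the `validate_age` function are invented for illustration:

```python
# Equivalence partitioning sketch: a field accepts ages 18-65, giving three
# partitions (below range, in range, above range). All names are invented.

def validate_age(age: int) -> bool:
    return 18 <= age <= 65

# One representative test case per partition covers each partition once.
assert validate_age(10) is False  # partition: age < 18
assert validate_age(40) is True   # partition: 18 <= age <= 65
assert validate_age(70) is False  # partition: age > 65
```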

2. A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management: the planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

3. The association of the definition of a variable with the use of that variable. Variable uses include computational use (e.g. multiplication) and predicate use (directing the execution of a path).
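
A minimal Python sketch of the two kinds of use, with an invented function:

```python
def price_with_tax(amount: float) -> float:
    rate = 0.2                # definition of `rate`
    if rate > 0:              # predicate use: `rate` directs the path taken
        return amount * rate  # computational use: `rate` feeds a multiplication
    return amount
```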

4. Testing to determine the security of the software product. See also functionality testing: the process of testing to determine the functionality of a software product.

5. An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]

6. A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
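
An illustrative LCSAJ on an invented listing (line numbers are part of the notation, so they are kept in the comments):

```python
# Source listing:
#   1: total = 0
#   2: i = 0
#   3: while i < n:
#   4:     total += i
#   5:     i += 1
#   6: print(total)
#
# LCSAJ triples (start of linear sequence, end of linear sequence, jump target):
#   (1, 3, 6)  lines 1-3 run linearly, then jump to line 6 when `i < n` is false
#   (4, 5, 3)  the loop body runs linearly, then jumps back to line 3
```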

7. The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

8. The process of finding, analyzing and removing the causes of failures in software.

9. A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.

10. Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]

11. The tracing of requirements through the layers of development documentation to components.

12. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.
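
This is the idea behind a control flow graph; a small Python sketch with an invented graph, enumerating all paths:

```python
from typing import Dict, List

# Invented graph: one decision node with two outgoing branches.
cfg: Dict[str, List[str]] = {
    "entry": ["decision"],
    "decision": ["then", "else"],
    "then": ["exit"],
    "else": ["exit"],
    "exit": [],
}

def all_paths(node: str, path: List[str]) -> List[List[str]]:
    path = path + [node]
    if not cfg[node]:          # no successors: a complete path ends here
        return [path]
    return [p for nxt in cfg[node] for p in all_paths(nxt, path)]

print(all_paths("entry", []))  # two paths, one per branch of the decision
```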

13. The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]

14. A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged.
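
A heavily simplified load-generation sketch; `call_system` is a hypothetical stand-in for a real test transaction, and real tools add ramp-up profiles, data pools and reporting:

```python
import threading
import time

def call_system() -> None:
    time.sleep(0.01)  # stand-in for issuing a real transaction

def virtual_user(samples: list) -> None:
    start = time.perf_counter()
    call_system()
    samples.append(time.perf_counter() - start)  # response time measurement

samples: list = []
threads = [threading.Thread(target=virtual_user, args=(samples,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(samples)} transactions, average {sum(samples) / len(samples):.4f}s")
```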

15. The set from which valid input and/or output values can be selected.

16. A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.

17. The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
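
A worked example with invented numbers:

```python
def defect_density(defects: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / size_kloc

# 30 defects found in a 12,500-line component:
print(defect_density(defects=30, size_kloc=12.5))  # 2.4 defects per KLOC
```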

18. A tool that carries out static analysis.

19. A test is deemed to pass if its actual result matches its expected result.

20. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

21. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management: the planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

22. A path that cannot be exercised by any set of possible input values.
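
An invented Python function containing an infeasible path:

```python
def classify(x: int) -> str:
    if x > 10:
        label = "big"
    else:
        label = "small"
    if x < 5:
        return label + "?"
    return label

# The path that takes `x > 10` as true and then `x < 5` as true is infeasible:
# no integer satisfies both conditions, so no input can exercise it.
```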

23. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
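
A worked sketch for N = 1 (sequences of two consecutive transitions) on an invented two-state machine:

```python
from itertools import product

transitions = {  # (state, event) -> next state
    ("off", "press"): "on",
    ("on", "press"): "off",
}

# All valid sequences of 2 transitions: the second must start where the first ends.
pairs = [(a, b) for a, b in product(transitions, repeat=2) if transitions[a] == b[0]]

executed = [(("off", "press"), ("on", "press"))]  # what a test suite exercised
coverage = 100 * len(set(executed) & set(pairs)) / len(pairs)
print(f"1-switch coverage: {coverage:.0f}%")  # 1 of 2 valid pairs -> 50%
```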

24. A white box test design technique in which test cases are designed to execute branches.
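
A minimal sketch with an invented function: one test case per branch:

```python
def absolute(x: int) -> int:
    if x < 0:      # the decision creating two branches
        return -x  # true branch
    return x       # false branch

assert absolute(-3) == 3  # exercises the true branch
assert absolute(4) == 4   # exercises the false branch
```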

25. Any (work) product that must be delivered to someone other than the (work) product's author.

26. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]
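
A small sketch drawing test operations in proportion to an invented operational profile:

```python
import random

operational_profile = {  # operation -> probability of typical use
    "search": 0.70,
    "add_to_cart": 0.25,
    "checkout": 0.05,
}

ops, weights = zip(*operational_profile.items())
test_sequence = random.choices(ops, weights=weights, k=10)
print(test_sequence)  # operations appear roughly as often as in real usage
```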

27. Commonly used to refer to a test procedure specification - especially an automated one.

28. The number or category assigned to an attribute of an entity by making a measurement. [ISO 14598]

29. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
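
An invented function showing why decision coverage is stronger than statement coverage:

```python
def cap(x: int) -> int:
    if x > 100:  # one decision, two outcomes
        x = 100
    return x

# cap(150) alone executes every statement (100% statement coverage) but only
# the true outcome: 1 of 2 outcomes = 50% decision coverage.
assert cap(150) == 100
# Adding cap(7) exercises the false outcome: 100% decision coverage.
assert cap(7) == 7
```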

30. A test environment consisting of the stubs and drivers needed to execute a test.
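
A minimal harness sketch with invented names: a stub replaces a dependency, and a driver invokes the unit under test:

```python
def fetch_rate_stub(currency: str) -> float:
    return 1.1  # canned answer replacing the real rate service

def convert(amount: float, currency: str, fetch_rate=fetch_rate_stub) -> float:
    return amount * fetch_rate(currency)  # the unit under test

def driver() -> None:
    # the driver calls the unit and checks the outcome
    assert abs(convert(100.0, "EUR") - 110.0) < 1e-9

driver()
```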

31. Execution of the test process against a single identifiable release of the test object.

32. An executable statement where a variable is assigned a value.

33. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

34. Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044]

35. Procedure used to derive and/or select test cases.

36. The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.
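
With two single conditions in one decision there are 2^2 = 4 combinations; an invented sketch:

```python
from itertools import product

def grant(age_ok: bool, id_ok: bool) -> bool:
    return age_ok and id_ok  # one decision containing two single conditions

# Exercising all four outcome combinations gives 100% multiple condition coverage.
for age_ok, id_ok in product([True, False], repeat=2):
    grant(age_ok, id_ok)
```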

37. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

38. The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

39. A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a 'mini V-model' with its own design, coding and testing phases.

40. The activity of establishing or updating a test plan.

41. A set of exit criteria.

42. A sequence of one or more consecutive executable statements containing no branches. Note: A node in a control flow graph represents a basic block.
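
Basic blocks marked in an invented function:

```python
def f(x: int) -> int:
    y = x + 1    # block 1: straight-line statements...
    z = y * 2    # ...no branch inside the block
    if z > 10:   # the decision ends block 1
        z -= 5   # block 2: the true branch
    return z     # block 3: the join point
```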

43. A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

44. A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
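
A hand-rolled sketch (real tools generate mutants automatically): the suite "kills" a mutant by failing on it:

```python
def is_adult(age: int) -> bool:
    return age >= 18  # original program

def is_adult_mutant(age: int) -> bool:
    return age > 18   # mutant: `>=` changed to `>`

def suite(fn) -> bool:
    return fn(18) is True and fn(17) is False  # boundary-value test cases

print(suite(is_adult))         # True: the suite passes on the original
print(suite(is_adult_mutant))  # False: the suite kills (detects) the mutant
```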

45. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

46. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]

47. The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs and external failure costs.

48. A form of static analysis based on the definition and usage of variables.
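
A toy static check using Python's standard `ast` module, flagging a name read before any assignment (a classic data flow anomaly); the analyzed source is invented:

```python
import ast

source = """
def f():
    print(total)   # `total` is read here...
    total = 1      # ...but only defined here
"""

names = [n for n in ast.walk(ast.parse(source)) if isinstance(n, ast.Name)]
defined = set()
for n in sorted(names, key=lambda n: (n.lineno, n.col_offset)):
    if isinstance(n.ctx, ast.Store):
        defined.add(n.id)
    elif n.id not in defined and n.id != "print":  # crude builtin filter
        print(f"possible use before definition: {n.id} (line {n.lineno})")
```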

49. The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

50. An extension of FMEA, as in addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value.
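
A worked criticality ranking with invented failure modes and ratings:

```python
failure_modes = [  # (failure mode, probability, severity on a 1-10 scale)
    ("sensor drift", 0.20, 3),
    ("brake command lost", 0.01, 10),
    ("display flicker", 0.30, 1),
]

# Criticality = probability x severity; rank to direct remedial effort.
for name, p, sev in sorted(failure_modes, key=lambda m: m[1] * m[2], reverse=True):
    print(f"{name:20s} criticality = {p * sev:.2f}")
```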