Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you may sometimes find the correct answer obvious; even so, taking the test repeatedly reinforces your understanding.
1. The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines of code, number of classes, or function points).
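This ratio (defect density) is simple to compute directly; the sketch below is illustrative only, with invented function and variable names:

```python
def defect_density(defects: int, size_kloc: float) -> float:
    """Defects found divided by system size, here measured in KLOC
    (thousands of lines of code); any consistent size unit works."""
    if size_kloc <= 0:
        raise ValueError("size must be positive")
    return defects / size_kloc

# e.g. 12 defects found in a 4,000-line component
print(defect_density(12, 4000 / 1000))  # 3.0 defects per KLOC
```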






2. The behavior predicted by the specification, or another source, of the component or system under specified conditions.






3. A test tool to perform automated test comparison of actual results with expected results.






4. A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.






5. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]






6. A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]






7. Procedure used to derive and/or select test cases.






8. The process of running a test on the component or system under test, producing actual result(s).






9. An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.






10. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]






11. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, or failures per number of computer runs. [IEEE 610]






12. A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [After IEEE 829]






13. The percentage of equivalence partitions that have been exercised by a test suite.
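A minimal sketch of how this percentage could be computed (the partition names are invented for illustration):

```python
def partition_coverage(exercised: set, all_partitions: set) -> float:
    """Percentage of equivalence partitions exercised by a test suite."""
    if not all_partitions:
        raise ValueError("no partitions defined")
    return 100.0 * len(exercised & all_partitions) / len(all_partitions)

# Hypothetical partitions for an "age" input field
partitions = {"negative", "0-17", "18-64", "65+"}
covered = {"0-17", "18-64", "65+"}               # exercised by the current suite
print(partition_coverage(covered, partitions))   # 75.0
```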






14. Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.






15. The planning, estimating, monitoring, and control of test activities, typically carried out by a test manager.






16. An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]






17. Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users' business procedures or operational procedures.






18. A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.






19. The process of testing to determine the interoperability of a software product. See also functionality testing: the process of testing to determine the functionality of a software product.






20. A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.






21. An instance of an input. See also input: a variable (whether stored within a component or outside) that is read by a component.






22. A continuous framework for test process improvement that describes the key elements of an effective test process, especially targeted at system testing and acceptance testing.






23. A human action that produces an incorrect result. [After IEEE 610]






24. The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. [After ISO 9126]






25. A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]






26. A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic, and to monitor the allocation, use, and de-allocation of memory and to flag memory leaks.






27. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]






28. Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.






29. Measurement of achieved coverage to a specified coverage item during test execution, referring to predetermined criteria to determine whether additional testing is required and, if so, which test cases are needed.






30. A device, computer program, or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO-178b] See also emulator: a device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.






31. Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.






32. The process of testing to determine the compliance of the component or system.






33. A defect in a program's dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.






34. A test approach in which the test suite comprises all combinations of input values and preconditions.
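The combinatorial growth that makes this approach impractical is easy to demonstrate (the input domains below are invented):

```python
from itertools import product

# Even three small input domains multiply quickly; realistic domains
# (e.g. all 32-bit integers) make exhaustive testing infeasible.
browsers = ["Firefox", "Chrome", "Safari"]
locales = ["en", "de", "fr", "ja"]
payments = ["card", "invoice", "paypal"]

all_cases = list(product(browsers, locales, payments))
print(len(all_cases))  # 3 * 4 * 3 = 36 cases for just three tiny inputs
```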






35. A tool that supports stress testing.






36. The process of demonstrating the ability to fulfill specified requirements. Note the term 'qualified' is used to designate the corresponding status. [ISO 9000]






37. A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher-level documentation. The most formal review technique and therefore always based on a documented procedure. [After IEEE 610, IEEE 1028]






38. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering, and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]






39. An incremental approach to integration testing where the lowest-level components are tested first, and then used to facilitate the testing of higher-level components. This process is repeated until the component at the top of the hierarchy is tested.






40. A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.






41. The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).
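One common way to combine the two factors named here is a simple impact-times-likelihood score; the scales and risk names below are invented for illustration:

```python
def risk_exposure(impact: float, likelihood: float) -> float:
    """Estimated exposure of an identified risk: impact (e.g. a 1-5
    scale) weighted by its probability of occurrence (0.0-1.0)."""
    return impact * likelihood

risks = {
    "data loss on upgrade":  risk_exposure(impact=5, likelihood=0.2),
    "slow report rendering": risk_exposure(impact=2, likelihood=0.8),
}
# Highest exposure first, to help prioritize test effort
for name, score in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```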






42. The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]






43. Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing: testing, either functional or non-functional, without reference to the internal structure of the component or system.






44. A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham]
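The technique can be sketched in a few lines; here the "table" is an in-memory list, though in practice it would often be a CSV file or spreadsheet, and the function under test is a trivial stand-in:

```python
def normalize_code(s: str) -> str:      # trivial function under test
    return s.strip().upper()

# Each row: (input, expected result) -- the data that drives the test
test_table = [
    ("abc",   "ABC"),
    ("  de ", "DE"),
    ("",      ""),
]

# The single control script: one loop executes every row in the table
for given, expected in test_table:
    actual = normalize_code(given)
    assert actual == expected, f"{given!r}: expected {expected!r}, got {actual!r}"
print(f"{len(test_table)} table-driven checks passed")
```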






45. The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]






46. A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing: statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]






47. The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.






48. Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system. [After IEEE 610]






49. The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.






50. Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]