Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so some answers may seem obvious at times, but retaking the test reinforces your understanding.
1. A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]






2. A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]






3. Execution of a test on a specific version of the test object.






4. The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]






5. A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]






6. A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






7. Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).






8. A system whose failure or malfunction may result in death or serious injury to people, or loss or severe damage to equipment, or environmental harm.






9. An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.
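As a study aid, here is a minimal Python sketch of equivalence partitions used as coverage items; the partitions and boundaries are invented purely for illustration, not taken from any standard.

# Hypothetical coverage items: three equivalence partitions for an "age" input.
# A coverage item counts as covered once at least one executed test falls inside it.
partitions = {
    "minor": lambda age: 0 <= age < 18,
    "adult": lambda age: 18 <= age < 65,
    "senior": lambda age: age >= 65,
}

executed_test_inputs = [5, 30]  # tests run so far

covered = {
    name for name, belongs in partitions.items()
    if any(belongs(age) for age in executed_test_inputs)
}
print(f"Coverage: {len(covered)}/{len(partitions)} items covered "
      f"({sorted(covered)}); 'senior' is still uncovered.")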






10. An extension of FMEA, as in addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value.






11. A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]






12. The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase.






13. Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.






14. A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]






15. A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test etc. A test type may take place on one or more test levels or test phases. [After TMap]






16. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.






17. A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]






18. The number or category assigned to an attribute of an entity by making a measurement. [ISO 14598]






19. A tool that supports stress testing.






20. The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]






21. A specific category of risk related to the type of testing that can mitigate (control) that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.






22. A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.






23. The behavior produced/observed when a component or system is tested.






24. Procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.






25. The process of testing to determine the maintainability of a software product.






26. Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.






27. The process of testing to determine the portability of a software product.






28. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.






29. The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs and external failure costs.






30. A high level document describing the principles, approach and major objectives of the organization regarding testing.






31. A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic, and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.
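To make the run-time monitoring idea concrete, here is a rough sketch using Python's standard tracemalloc module; it only illustrates the allocation-tracking part of such a tool, and the "leaky" function is invented for the example.

import tracemalloc

def leaky():
    # Simulated "leak": allocations kept alive in a module-level name.
    global _kept
    _kept = [bytearray(10_000) for _ in range(100)]

tracemalloc.start()
before = tracemalloc.take_snapshot()
leaky()
after = tracemalloc.take_snapshot()

# Report the source lines responsible for the largest memory growth.
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)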






32. A test approach in which the test suite comprises all combinations of input values and preconditions.
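A small, hedged illustration of why that approach is usually impractical: generating every combination of even a few invented input values with Python's itertools shows how fast the test suite grows.

from itertools import product

# Invented example inputs: even tiny domains multiply quickly.
browsers = ["Chrome", "Firefox", "Safari"]
locales = ["en", "de", "ja", "fr"]
payment_methods = ["card", "paypal", "invoice"]

all_combinations = list(product(browsers, locales, payment_methods))
print(len(all_combinations), "test cases for just three small inputs")  # 36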






33. A chronological record of relevant details about the execution of tests. [IEEE 829]






34. The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
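A minimal sketch with an invented decision of what 100% decision condition coverage asks for: each atomic condition takes both outcomes, and the decision as a whole takes both outcomes.

def grant_discount(is_member: bool, total: float) -> bool:
    # One decision with two atomic conditions: is_member, and total > 100.
    return is_member and total > 100

# Three tests exercise every condition outcome and every decision outcome,
# even with Python's short-circuit evaluation:
assert grant_discount(True, 150) is True   # is_member=T, total>100=T -> decision T
assert grant_discount(True, 50) is False   # is_member=T, total>100=F -> decision F
assert grant_discount(False, 0) is False   # is_member=F (short-circuits) -> decision F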






35. A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.






36. The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.






37. A feature or characteristic that affects an item's quality. [IEEE 610]






38. A set of exit criteria.






39. The function to check on the contents of libraries of configuration items, e.g. for standards compliance. [IEEE 610]






40. The capability of the software product to enable specified modifications to be implemented. [ISO 9126] See also maintainability. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.






41. Acronym for Computer Aided Software Engineering.






42. A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test.
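A minimal sketch of that idea, with invented keywords and a toy application, showing how supporting scripts interpret keyword rows that would normally live in a data file.

# Toy "application" under test: an in-memory key/value store (invented for illustration).
store = {}

# Supporting scripts: one handler per keyword.
def do_set(key, value):
    store[key] = value

def do_check(key, expected):
    assert store.get(key) == expected, f"{key!r}: expected {expected!r}, got {store.get(key)!r}"

keywords = {"set": do_set, "check": do_check}

# Rows as they might appear in a data file: a keyword followed by its arguments.
test_table = [
    ("set", "username", "alice"),
    ("check", "username", "alice"),
]

# Control script: interpret each row by dispatching to the matching handler.
for keyword, *args in test_table:
    keywords[keyword](*args)
print("all keyword steps passed")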






43. A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
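To make the table part concrete, here is a hedged sketch with an invented decision table for a login rule: each row combines causes with the expected effect, and each row becomes one test case.

# Invented decision table: causes -> expected effect.
decision_table = [
    {"valid_password": True,  "account_locked": False, "expect_login": True},
    {"valid_password": True,  "account_locked": True,  "expect_login": False},
    {"valid_password": False, "account_locked": False, "expect_login": False},
    {"valid_password": False, "account_locked": True,  "expect_login": False},
]

def login_allowed(valid_password, account_locked):
    # Toy implementation of the rule under test.
    return valid_password and not account_locked

# Each rule in the table is executed as one test case.
for rule in decision_table:
    actual = login_allowed(rule["valid_password"], rule["account_locked"])
    assert actual == rule["expect_login"], rule
print("all decision-table rules pass")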






44. The process of testing to determine the functionality of a software product.






45. Procedure to derive and/or select test cases for nonfunctional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique. Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.






46. Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple domains, interconnected, addressing large-scale inter-disciplinary common problems and purposes.






47. A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
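A rough sketch of the idea, using hand-written mutants of a toy function (real tools generate the slight variants automatically): the test suite's thoroughness is judged by how many mutants it can tell apart from the original.

# Original program and two hand-written mutants (slight variants of it).
def price_with_discount(price, is_member):
    return price * 0.9 if is_member else price

def mutant_1(price, is_member):   # branch mutation: condition negated
    return price * 0.9 if not is_member else price

def mutant_2(price, is_member):   # constant mutation: 0.9 -> 0.8
    return price * 0.8 if is_member else price

# A test suite as (inputs, expected output) pairs.
test_suite = [((100, True), 90.0), ((100, False), 100)]

def kills(mutant):
    # A mutant is "killed" if at least one test observes a wrong result from it.
    return any(mutant(*args) != expected for args, expected in test_suite)

killed = sum(kills(m) for m in (mutant_1, mutant_2))
print(f"mutation score: {killed}/2 mutants killed")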






48. An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
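As one way to see this in practice, a short sketch using Python's standard trace module to report which statements of a toy function a small test suite executed; the function and tests are invented for the example.

import trace

def classify(n):
    if n < 0:
        return "negative"   # never executed by the tests below, so reported as missing
    return "non-negative"

def run_tests():
    assert classify(3) == "non-negative"
    assert classify(0) == "non-negative"

# Count which lines execute while the tests run, then write a coverage summary.
tracer = trace.Trace(count=1, trace=0)
tracer.runfunc(run_tests)
tracer.results().write_results(show_missing=True, summary=True, coverdir=".")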






49. An instance of an input. See also input. A variable (whether stored within a component or outside) that is read by a component.






50. Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.