Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the correct answer obvious, but taking the test repeatedly still reinforces your understanding.

1. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]

2. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. [CMMI]

3. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

4. A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of condition determination coverage. [TMap]

5. A five-level staged framework for test process improvement, related to the Capability Maturity Model Integration (CMMI), that describes the key elements of an effective test process.

6. The level of (business) importance assigned to an item, e.g. defect.

7. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.

8. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.

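A study aid, not part of the glossary: a minimal Python sketch (hypothetical function and tests) showing that two tests suffice for 100% decision coverage of a single if/else.

    def classify(n):
        # One decision with two outcomes: True and False.
        if n < 0:
            return "negative"
        return "non-negative"

    # Two tests exercise both decision outcomes (and every statement):
    assert classify(-1) == "negative"       # decision outcome: True
    assert classify(3) == "non-negative"    # decision outcome: False
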
9. A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program. [CMMI]

10. A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

11. A five-level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance. [CMM] See also Capability Maturity Model Integration (CMMI).

12. Analysis of source code carried out without execution of that software.

13. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.

14. The percentage of boundary values that have been exercised by a test suite.

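For illustration only (the valid range 1..100 below is invented): boundary value coverage counts how many of the partition edges a test suite exercises.

    def accept(age):
        # Assumed valid partition: 1..100 inclusive.
        return 1 <= age <= 100

    # The boundary values sit at the edges of the valid/invalid partitions;
    # exercising all four gives 100% boundary value coverage for this range.
    assert [accept(v) for v in (0, 1, 100, 101)] == [False, True, True, False]
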
15. A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.

16. An integration test type that is concerned with testing the interfaces between components or systems.

17. An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

18. The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.

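A hedged sketch (hypothetical function): two tests give every atomic condition both values, which also shows that 100% condition coverage does not imply 100% decision coverage.

    def can_checkout(logged_in, cart_size):
        has_items = cart_size > 0          # atomic condition A
        return logged_in and has_items     # atomic condition B is logged_in

    # A=False/B=True, then A=True/B=False: each condition has been both
    # True and False, yet the decision itself never evaluated to True.
    assert can_checkout(True, 0) is False
    assert can_checkout(False, 1) is False
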
19. The first executable statement within a component.

20. The process of testing the installability of a software product. See also portability testing: the process of testing to determine the portability of a software product.

21. A factor that could result in future negative consequences; usually expressed as impact and likelihood.

22. A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

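Such a diagram maps directly onto a transition table; a minimal sketch with invented door states and events:

    # (state, event) -> next state; each arrow in the diagram is one entry.
    TRANSITIONS = {
        ("closed", "open"): "opened",
        ("opened", "close"): "closed",
        ("closed", "lock"): "locked",
        ("locked", "unlock"): "closed",
    }

    def step(state, event):
        # Pairs missing from the table are invalid transitions: no state change.
        return TRANSITIONS.get((state, event), state)

    assert step("closed", "lock") == "locked"
    assert step("locked", "open") == "locked"   # invalid event, state unchanged
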
23. A test basis document that can only be amended by a formal change control process. See also baseline: a specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process.

24. The representation of a distinct set of tasks performed by the component or system, possibly based on user behavior when interacting with the component or system, and their probabilities of occurrence. A task is logical rather than physical and can be executed over several machines or be executed in non-contiguous time segments.

25. A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing: statistical testing using a model of system operations (short duration tasks) and their probability of typical use.

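A rough sketch of the idea in Python (the profile weights below are invented for illustration): test operations are drawn according to an assumed statistical model of use.

    import random

    # Assumed operational profile: task -> probability in typical use.
    profile = {"search": 0.70, "add_to_cart": 0.25, "checkout": 0.05}

    random.seed(42)  # fixed seed so the generated suite is reproducible
    operations = random.choices(list(profile), weights=list(profile.values()), k=100)

    # Frequent tasks dominate the generated tests, mirroring real usage.
    print(operations[:10])
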
26. A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system. See also performance testing, stress testing.

27. A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.

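A toy sketch of one such check (a real analyzer is far more thorough): flagging the coding-standard anomaly 'x == True' using Python's ast module.

    import ast

    source = (
        "def is_ready(flag):\n"
        "    if flag == True:    # violates a common coding standard\n"
        "        return 'ready'\n"
        "    return 'waiting'\n"
    )

    for node in ast.walk(ast.parse(source)):
        # Flag comparisons against the literal True; 'if flag:' is preferred.
        if isinstance(node, ast.Compare):
            for right in node.comparators:
                if isinstance(right, ast.Constant) and right.value is True:
                    print(f"line {node.lineno}: comparison to True")
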
28. An instance of an output. See also output: a variable (whether stored within a component or outside) that is written by a component.

29. A document identifying test items, their configuration, current status and other delivery information delivered by development to testing, and possibly other stakeholders, at the start of a test execution phase. [After IEEE 829]

30. An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]

31. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

32. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).

33. A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.

34. An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]

35. Procedure used to derive and/or select test cases.

36. The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

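To make the numbers concrete, a hedged sketch (invented pricing rules): two sequential decisions create 2 x 2 = 4 paths, so path coverage demands more tests than decision coverage.

    def price(amount, is_member, has_coupon):
        if is_member:        # decision 1
            amount *= 0.9
        if has_coupon:       # decision 2
            amount -= 5
        return amount

    # Decision coverage needs only 2 tests; path coverage needs all 4 paths.
    assert price(100, True, True) == 85.0
    assert price(100, True, False) == 90.0
    assert price(100, False, True) == 95
    assert price(100, False, False) == 100
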
37. A reason or purpose for designing and executing a test.

38. A black box test design technique in which test cases are designed to execute user scenarios.

39. A test is deemed to fail if its actual result does not match its expected result.

40. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review: a review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.

41. Testing using input values that should be rejected by the component or system. See also error tolerance: the ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]

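A small sketch of negative testing with pytest (the parser below is hypothetical): the test passes only when the component rejects the invalid inputs.

    import pytest

    def parse_age(text):
        value = int(text)    # raises ValueError for non-numeric input
        if not 0 <= value <= 130:
            raise ValueError("age out of range")
        return value

    def test_rejects_invalid_inputs():
        # Negative tests: inputs the component should refuse.
        with pytest.raises(ValueError):
            parse_age("abc")
        with pytest.raises(ValueError):
            parse_age("-3")
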
42. The response of a component or system to a set of input values and preconditions.

43. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management: the planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

44. A tool for seeding (i.e. intentionally inserting) faults in a component or system.

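The mechanics can be sketched in a few lines (a real tool mutates parsed code, not raw text): one fault is seeded by flipping a comparison operator, and a good test suite should detect it.

    original = "def max_of(a, b):\n    return a if a > b else b\n"

    # Seed a fault: mutate '>' into '<' and load the faulty component.
    namespace = {}
    exec(original.replace("a > b", "a < b", 1), namespace)

    # A test that catches the seeded fault: max_of(2, 1) should be 2.
    print("seeded fault detected:", namespace["max_of"](2, 1) != 2)   # True
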
45. The tracing of requirements through the layers of development documentation to components.

46. The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]

47. Testing where the system is subjected to large volumes of data. See also resource-utilization testing: the process of testing to determine the resource-utilization of a software product.

48. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.

49. Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

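In practice this is the everyday assertion inside a running test; a minimal sketch with a hypothetical function:

    def add(a, b):
        return a + b

    # Dynamic comparison: actual vs. expected, checked while the code executes.
    actual, expected = add(2, 3), 5
    assert actual == expected, f"expected {expected}, got {actual}"
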
50. The process of testing to determine the recoverability of a software product. See also reliability testing: the process of testing to determine the reliability of a software product.