Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from the answers to other questions, so some answers may seem obvious at times, but retaking the test reinforces your understanding each time.
1. The testing of individual software components. [After IEEE 610]






2. Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]
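For illustration, a minimal Python sketch, assuming two hypothetical variants (`sort_v1`, `sort_v2`) of the same specification; both are run on the same inputs and any discrepancy between their outputs is reported for analysis.

```python
import random

# Two hypothetical variants of the same component (assumption: both
# are meant to sort a list of integers in ascending order).
def sort_v1(values):
    return sorted(values)

def sort_v2(values):
    result = list(values)
    for i in range(len(result)):
        for j in range(i + 1, len(result)):
            if result[j] < result[i]:
                result[i], result[j] = result[j], result[i]
    return result

# Execute both variants with the same inputs and compare the outputs.
random.seed(42)
for _ in range(100):
    data = [random.randint(-100, 100) for _ in range(10)]
    out1, out2 = sort_v1(data), sort_v2(data)
    if out1 != out2:
        print(f"Discrepancy for input {data}: {out1} vs {out2}")
```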






3. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.






4. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.






5. A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.






6. A minimal software item that can be tested in isolation.






7. Testing of software or specification by manual simulation of its execution. See also static analysis. Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.






8. Testing based on an analysis of the internal structure of the component or system.






9. A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting.






10. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.






11. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.






12. Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]






13. A test is deemed to fail if its actual result does not match its expected result.






14. The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability. The ease with which the software product can be transferred from one hardware or software environment to another.






15. Testing that involves the execution of the software of a component or system.






16. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.






17. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once. [Beizer]
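For illustration, a minimal Python sketch, assuming a hypothetical `classify_age` function whose input domain splits into three partitions (negative values, 0-17, 18 and above); one representative value is chosen from each partition.

```python
# Hypothetical component under test: classifies an age value.
def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

# One representative test case per equivalence partition.
partitions = [
    (-5, ValueError),   # invalid partition: negative ages
    (10, "minor"),      # valid partition: 0..17
    (30, "adult"),      # valid partition: 18 and above
]

for value, expected in partitions:
    try:
        actual = classify_age(value)
    except Exception as exc:
        actual = type(exc)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: classify_age({value}) -> {actual}")
```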






18. A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing. A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions.
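For illustration, a minimal Python sketch, assuming a hypothetical two-state door model (closed/open); the cases exercise both valid transitions and one invalid transition that the model should reject.

```python
# Hypothetical state machine: a door that can be opened and closed.
VALID_TRANSITIONS = {
    ("closed", "open_door"): "open",
    ("open", "close_door"): "closed",
}

def apply_event(state, event):
    try:
        return VALID_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event} in state {state}")

# Test cases covering valid and invalid state transitions.
cases = [
    ("closed", "open_door", "open"),        # valid transition
    ("open", "close_door", "closed"),       # valid transition
    ("closed", "close_door", ValueError),   # invalid: should be rejected
]

for state, event, expected in cases:
    try:
        actual = apply_event(state, event)
    except ValueError as exc:
        actual = type(exc)
    print("PASS" if actual == expected else "FAIL", state, event, actual)
```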






19. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.






20. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]






21. A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.
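For illustration, a minimal Python sketch; in practice the table would live in a spreadsheet or CSV file, but here it is inlined so the example is self-contained, and the `login` function is a hypothetical component under test.

```python
import csv
import io

# Inlined test data table (normally kept in a spreadsheet or CSV file).
TABLE = """username,password,expected
alice,secret,granted
alice,wrong,denied
,secret,denied
"""

# Hypothetical component under test.
def login(username, password):
    return "granted" if username == "alice" and password == "secret" else "denied"

# Single control script: one loop executes every row of the table.
for row in csv.DictReader(io.StringIO(TABLE)):
    actual = login(row["username"], row["password"])
    status = "PASS" if actual == row["expected"] else "FAIL"
    print(f"{status}: login({row['username']!r}, {row['password']!r}) -> {actual}")
```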






22. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






23. A series which appears to be random but is in fact generated according to some prearranged sequence.
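For illustration, a minimal Python sketch showing how a seeded generator produces a series that looks random but is fully determined by its prearranged starting point, so the exact same sequence can be reproduced on every run.

```python
import random

def pseudo_random_series(seed, length=5):
    # The sequence appears random, but is generated from a prearranged
    # starting point (the seed), so it is exactly reproducible.
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(length)]

print(pseudo_random_series(1234))                                 # same output every run
print(pseudo_random_series(1234) == pseudo_random_series(1234))   # True
```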






24. An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]






25. A model structure wherein attaining the goals of a set of process areas establishes a maturity level; each level builds a foundation for subsequent levels. [CMMI]






26. A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]






27. An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.






28. The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.






29. The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
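A quick worked example of the metric described above, using lines of code as the size measure (all figures are hypothetical):

```python
# Hypothetical figures: 46 defects found in a component of 11,500 lines of code.
defects = 46
lines_of_code = 11_500

# Density expressed per thousand lines of code (KLOC).
density_per_kloc = defects / (lines_of_code / 1000)
print(f"{density_per_kloc:.1f} defects per KLOC")  # 4.0 defects per KLOC
```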






30. A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as 'best' by other peer organizations.






31. A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610]






32. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]






33. An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]






34. The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs.






35. An instance of an input. See also input. A variable (whether stored within a component or outside) that is read by a component.






36. The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]






37. Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.






38. Procedure used to derive and/or select test cases.






39. The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]






40. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
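A quick worked example of this ratio, expressed per unit of time and per number of transactions (all figures are hypothetical):

```python
# Hypothetical observation window: 12 failures over 2,000 hours of operation,
# during which 50,000 transactions were processed.
failures = 12
hours = 2_000
transactions = 50_000

print(f"{failures / hours:.4f} failures per hour")                              # 0.0060
print(f"{failures / transactions * 1000:.2f} failures per 1,000 transactions")  # 0.24
```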






41. The representation of a distinct set of tasks performed by the component or system, possibly based on user behavior when interacting with the component or system, and their probabilities of occurrence. A task is logical rather than physical and can be executed over several machines or be executed in non-contiguous time segments.






42. A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






43. An extension of FMEA, as in addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value.






44. The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.






45. A formula based test estimation method based on function point analysis. [TMap]






46. The process of testing to determine the compliance of the component or system.






47. Testing, either functional or non-functional, without reference to the internal structure of the component or system. Black-box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.






48. The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.






49. Comparison of actual and expected results - performed after the software has finished running.






50. Operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by operator and/or administrator focusing on operational aspects, e.g. recoverability, resource-behavior, installability and technical compliance.