Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from the answers to other questions, so you may at times find an answer obvious, but you will see that it reinforces your understanding each time you take the test.
1. The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.
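
As a study aid, here is a minimal Python sketch of what the two measures count (the function and values are invented for illustration; both conditions are evaluated explicitly so short-circuiting does not hide an outcome):

    # One decision built from two conditions.
    def free_shipping(total, member):
        over_limit = total > 100     # condition A
        is_member = member           # condition B
        if over_limit or is_member:  # the decision
            return True
        return False

    # Two tests reach 100% decision condition coverage of this function:
    assert free_shipping(150, True)       # A=True,  B=True,  decision True
    assert not free_shipping(50, False)   # A=False, B=False, decision False
    # Every condition outcome and every decision outcome is now exercised.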






2. A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices.






3. A black box test design technique in which test cases are designed to execute user scenarios.






4. An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.






5. A specification of the activity which a component or system being tested may experience in production. A load profile consists of a designated number of virtual users who process a defined set of transactions in a specified time period and according to a predefined operational profile.
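
As a study aid, a load profile can be pictured as configuration data; a minimal sketch (all field names and numbers are invented):

    # Hypothetical load profile: 200 virtual users running a fixed
    # transaction mix for one hour.
    load_profile = {
        "virtual_users": 200,
        "duration_minutes": 60,
        "transactions_per_user_per_hour": {"search": 240, "checkout": 12},
    }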






6. A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]






7. The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification).






8. Acronym for Commercial Off-The-Shelf software. See off-the-shelf software. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.






9. The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.






10. A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]






11. A test is deemed to pass if its actual result matches its expected result.






12. The process of demonstrating the ability to fulfill specified requirements. Note the term 'qualified' is used to designate the corresponding status. [ISO 9000]






13. A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]






14. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.






15. The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity. The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine). [After McCabe]
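
As a study aid, a worked instance of that formula: a function containing an if/else followed by a second if has a control flow graph with 7 edges, 6 nodes and 1 connected part, so

    v(G) = L - N + 2P = 7 - 6 + 2*1 = 3

i.e. three independent paths, matching the rule of thumb "number of decisions + 1" for structured code.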






16. An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.






17. Operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by operators and/or administrators focusing on operational aspects, e.g. recoverability, resource-behavior, installability and technical compliance.






18. A high level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]






19. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]






20. Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development. A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.






21. A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]






22. An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
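
As a study aid, a minimal sketch of driving such an analysis from Python with the coverage.py package (run_test_suite is a hypothetical entry point for whatever tests you run):

    import coverage

    cov = coverage.Coverage()
    cov.start()
    run_test_suite()   # hypothetical: execute the tests being measured
    cov.stop()
    cov.report()       # prints which statements were covered, per file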






23. Testing to determine the scalability of the software product.






24. The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]






25. Execution of a test on a specific version of the test object.






26. A set of interrelated activities, which transform inputs into outputs. [ISO 12207]






27. A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]






28. An attribute of a test indicating whether the same results are produced each time the test is executed.






29. Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.






30. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
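
As a study aid: for N = 1 this is "1-switch" coverage, i.e. every sequence of two consecutive transitions. A minimal Python sketch that enumerates those sequences for a hypothetical two-state toggle:

    # Transitions of a toggle switch, as (from_state, to_state) pairs.
    transitions = [("OFF", "ON"), ("ON", "OFF")]

    # Valid sequences of N+1 = 2 consecutive transitions.
    pairs = [(a, b) for a in transitions for b in transitions if a[1] == b[0]]
    # Two sequences: OFF->ON->OFF and ON->OFF->ON; a suite exercising both
    # achieves 100% 1-switch coverage of this machine.
    print(pairs)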






31. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing. Testing based on an analysis of the specification of the functionality of a component or system.






32. A white box test design technique in which test cases are designed to execute LCSAJs.






33. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]






34. Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.






35. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.






36. An executable statement where a variable is assigned a value.
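
As a study aid, the distinction in two lines of Python (the variable names are arbitrary):

    x = 2 + 3    # a definition: the variable x is assigned a value
    y = x * 10   # x is used (read) here, while y is defined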






37. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]






38. The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.






39. A variable (whether stored within a component or outside) that is read by a component.






40. Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.






41. A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham]
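
As a study aid, a minimal Python sketch of the technique, assuming a hypothetical function square under test and an inlined table (in practice the table often comes from a spreadsheet or CSV file):

    def square(n):
        return n * n

    # The test data lives in a table of (input, expected) rows ...
    cases = [
        (2, 4),
        (3, 9),
        (-4, 16),
    ]

    # ... and a single control script executes every row.
    for value, expected in cases:
        actual = square(value)
        assert actual == expected, f"square({value}) = {actual}, expected {expected}"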






42. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.






43. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]






44. A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.






45. The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]






46. A black box test design technique in which test cases are designed to execute user scenarios.






47. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.






48. The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability. The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.






49. A set of one or more test cases. [IEEE 829]






50. Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.