Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are chosen randomly from the answers to other questions, so some answers may seem obvious at times; even so, each retake reinforces your understanding.
1. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.

2. Execution of the test process against a single identifiable release of the test object.

3. A model that shows the growth in reliability over time during continuous testing of a component or system as a result of the removal of defects that result in reliability failures.

4. A sequence of transactions in a dialogue between a user and the system with a tangible result.

5. An extension of FMEA. In addition to the basic FMEA, it includes a criticality analysis, which is used to chart the probability of failure modes against the severity of their consequences. The result highlights failure modes with relatively high probability and severity of consequences, allowing remedial effort to be directed where it will produce the greatest value.

6. The evaluation of a condition to True or False.

7. An executable statement where a variable is assigned a value.

8. The period of time in the software life cycle during which the requirements for a software product are defined and documented. [IEEE 610]

9. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results, and arbitrariness guides the test execution activity.

10. A high-level metric of effectiveness and/or efficiency used to guide and control progressive test development, e.g. Defect Detection Percentage (DDP).

11. Procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.

12. The process of testing to determine the efficiency of a software product.

13. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.

14. The capability of the software product to use appropriate amounts and types of resources, for example the amounts of main and secondary memory used by the program and the sizes of required temporary or overflow files, when the software performs its function under stated conditions. [After ISO 9126]

15. A specification of the activity which a component or system being tested may experience in production. A load profile consists of a designated number of virtual users who process a defined set of transactions in a specified time period and according to a predefined operational profile.

16. A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to predefined requirements rules.

17. A test plan that typically addresses one test phase. See also test plan: a document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, the rationale for their choice, and any risks requiring contingency planning.

18. A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called "heuristics").

19. Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

20. An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

21. A tool that facilitates the recording and status tracking of incidents. Such tools often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents, and provide reporting facilities. See also defect management tool.

22. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

23. A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a 'mini V-model' with its own design, coding and testing phases.

24. The process of combining components or systems into larger assemblies.

25. The process of testing to determine the maintainability of a software product.

26. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).

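The technique in item 26 can be illustrated with a short Python sketch (the discount rule and its conditions are hypothetical, chosen only to give the decision two single conditions in one statement):

```python
from itertools import product

# A decision combining two single conditions within one statement.
def grant_discount(is_member, order_total):
    return is_member and order_total > 50

# Multiple condition testing: exercise every combination of the individual
# condition outcomes: True/True, True/False, False/True, False/False.
cases = []
for member, big_order in product([True, False], repeat=2):
    total = 100 if big_order else 10  # a total that makes the second condition hold or fail
    cases.append(((member, big_order), grant_discount(member, total)))

for (member, big_order), result in cases:
    print(member, big_order, "->", result)
# Only the True/True combination grants the discount.
```

With two conditions this means four test cases; the number of combinations grows as 2^n with the number of single conditions, which is why this technique is usually reserved for critical decisions.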
27. A review characterized by documented procedures and requirements, e.g. inspection.

28. Testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.

29. The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.

30. A black box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing: a form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions.

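State transition testing, as defined in item 30, can be sketched in a few lines of Python (the document-workflow states and events are illustrative, not from the source):

```python
# A minimal state machine: valid transitions are listed explicitly;
# any (state, event) pair not in the table is an invalid transition.
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} in state {state!r}")

# Valid-transition tests (0-switch coverage: every single transition once).
assert next_state("draft", "submit") == "review"
assert next_state("review", "approve") == "published"
assert next_state("review", "reject") == "draft"

# Invalid-transition test: approving straight from draft must be rejected.
try:
    next_state("draft", "approve")
except ValueError:
    print("invalid transition correctly rejected")
```

For 1-switch (N=1) coverage the tests would instead execute every valid sequence of two consecutive transitions, e.g. submit followed by reject.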
31. The process of testing to determine the resource-utilization of a software product. See also efficiency testing. The process of testing to determine the efficiency of a software product.

32. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]

33. The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user, when used under specified conditions.

34. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.

35. The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.

36. A model structure wherein attaining the goals of a set of process areas establishes a maturity level; each level builds a foundation for subsequent levels. [CMMI]

37. The process of identifying risks using techniques such as brainstorming, checklists and failure history.

38. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

39. A human action that produces an incorrect result. [After IEEE 610]

40. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.

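The decision coverage measure in item 40 can be made concrete with a small Python sketch (the `classify` function and its inputs are hypothetical):

```python
# 100% decision coverage requires every decision outcome (True and False)
# to be exercised at least once by the test suite.
def classify(x):
    if x < 0:      # decision 1
        return "negative"
    if x == 0:     # decision 2
        return "zero"
    return "positive"

# Two decisions -> four outcomes. Three tests exercise all of them:
#   x = -1 : decision 1 True
#   x =  0 : decision 1 False, decision 2 True
#   x =  5 : decision 1 False, decision 2 False
tests = {-1: "negative", 0: "zero", 5: "positive"}
for x, expected in tests.items():
    assert classify(x) == expected
print("all decision outcomes exercised")
```

Note the asymmetry mentioned in the definition: these three tests also execute every statement and branch, but a suite achieving 100% statement coverage alone (e.g. only x = -1, 0, 5 minus one case) would not necessarily cover both outcomes of each decision.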
41. The total costs incurred on quality activities and issues, often split into prevention costs, appraisal costs, internal failure costs and external failure costs.

42. The tracing of requirements through the layers of development documentation to components.

43. Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]

44. A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]

45. A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository (e.g. a requirements management tool), from specified test conditions held in the tool itself, or from code.

46. A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.

47. Testing conducted to evaluate a component or system in its operational environment. [IEEE 610]

48. The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]

49. A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item; control changes to those characteristics; record and report change processing and implementation status; and verify compliance with specified requirements. [IEEE 610]

50. The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]