Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may at times find the correct answer obvious, but repeating the test this way reinforces your understanding.
1. The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
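For study purposes only, a minimal Python sketch of this idea; the accept() function and the input values are invented. Note that 100% condition coverage does not by itself guarantee 100% decision coverage:

```python
# Hypothetical decision with two single conditions: c1 = (a > 0), c2 = (b > 0).
def accept(a, b):
    if a > 0 and b > 0:
        return "accepted"
    return "rejected"

# Two test cases give 100% condition coverage:
#   (a=1,  b=-1) -> c1 True,  c2 False
#   (a=-1, b=1)  -> c1 False, c2 True
# Yet the decision as a whole is False in both tests, so condition coverage
# here does not also achieve full decision coverage.
for a, b in [(1, -1), (-1, 1)]:
    print(a, b, accept(a, b))
```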






2. A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources. [ISO 9000]






3. The process of testing to determine the interoperability of a software product. See also functionality testing. The process of testing to determine the functionality of a software product.






4. A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.






5. A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.






6. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects, and provide reporting facilities. See also incident management tool.






7. Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.






8. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






9. A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.
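As a study aid, a minimal Python sketch of drawing test inputs to match an operational profile; the operations and their weights are invented for illustration:

```python
import random

# Hypothetical operational profile: relative frequency of user operations.
profile = {"browse": 0.7, "search": 0.2, "checkout": 0.1}

random.seed(42)  # pseudo-random but reproducible selection
selected = random.choices(list(profile), weights=list(profile.values()), k=10)
print(selected)  # operations to test, drawn roughly in proportion to expected usage
```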






10. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
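Purely as an illustration, a tiny Python sketch of such a suite: the postcondition of the first test (an account exists) is the precondition of the second (login succeeds). The register/login functions are invented:

```python
# Invented in-memory "system" so the example is self-contained.
users = {}

def register(name, password):
    users[name] = password

def login(name, password):
    return users.get(name) == password

def test_register():
    register("alice", "s3cret")
    assert "alice" in users          # postcondition: the account now exists

def test_login():
    assert login("alice", "s3cret")  # precondition: the account from the previous test

suite = [test_register, test_login]  # order matters in this suite
for test in suite:
    test()
print("suite passed")
```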






11. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing. Testing based on an analysis of the specification of the functionality of a component or system.






12. Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and, if so, which test cases are needed.






13. Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users' business procedures or operational procedures.






14. A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute or structural element.






15. The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.






16. A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters. See also orthogonal array testing. A systematic way of testing all-pair combinations of variables using orthogonal arrays.
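For study purposes, a small Python sketch of the idea with three invented parameters of two values each: a hand-picked suite of four tests covers every pair of values, whereas exhaustive testing would need all eight combinations.

```python
from itertools import combinations, product

# Hypothetical parameters, each with two values.
params = {
    "os": ["linux", "windows"],
    "browser": ["chrome", "firefox"],
    "lang": ["en", "de"],
}

# A hand-picked 4-test suite (exhaustive testing would need 2 * 2 * 2 = 8 tests).
suite = [
    {"os": "linux",   "browser": "chrome",  "lang": "en"},
    {"os": "linux",   "browser": "firefox", "lang": "de"},
    {"os": "windows", "browser": "chrome",  "lang": "de"},
    {"os": "windows", "browser": "firefox", "lang": "en"},
]

# Every pair of values that must appear in at least one test.
required = {
    ((p1, v1), (p2, v2))
    for p1, p2 in combinations(params, 2)
    for v1, v2 in product(params[p1], params[p2])
}
# Every pair of values the suite actually exercises.
covered = {
    ((p1, t[p1]), (p2, t[p2]))
    for t in suite
    for p1, p2 in combinations(params, 2)
}
print(required <= covered)  # True: all pairs covered by 4 of the 8 possible tests
```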






17. The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.






18. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]
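As an illustration, a minimal Python sketch of a single test case written as code, with its precondition, input values, expected result and postcondition marked in comments; the withdraw() function is hypothetical:

```python
def withdraw(balance, amount):
    """Hypothetical component under test."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_withdraw_within_balance():
    balance = 100                 # execution precondition: the account holds 100
    amount = 30                   # input values
    expected = 70                 # expected result
    actual = withdraw(balance, amount)
    assert actual == expected     # execution postcondition: the new balance is 70

test_withdraw_within_balance()
print("test case passed")
```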






19. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.






20. A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]






21. A test plan that typically addresses one test level. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. [After IEEE 829]






22. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.






23. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]






24. A risk directly related to the test object. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.






25. Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system. [After IEEE 610]






26. A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.






27. A skilled professional who is involved in the testing of a component or system.






28. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. See also actual result, expected result. The behavior produced/observed when a component or system is tested.






29. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM. [CMMI]






30. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






31. A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]






32. The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]






33. A minimal software item that can be tested in isolation.






34. A tool that carries out static analysis.






35. An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]






36. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.






37. The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]






38. The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.






39. A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed. [IEEE 610]






40. A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
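As a study aid, a minimal Python sketch of a test driver: a small script that takes over the calling of a component whose real caller is not yet available. The calculate_tax() component and its expected values are invented:

```python
def calculate_tax(amount, rate):
    """Hypothetical component under test."""
    return round(amount * rate, 2)

def driver():
    # The driver supplies the calls and the checks that the real caller would make.
    cases = [((100.0, 0.2), 20.0), ((19.99, 0.07), 1.4)]
    for (amount, rate), expected in cases:
        assert calculate_tax(amount, rate) == expected
    print("driver: all calls returned the expected results")

driver()
```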






41. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
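For study purposes, the usual back-of-the-envelope estimate sketched in Python, under the strong assumption that seeded and native defects are equally likely to be detected; all numbers are invented:

```python
# Fault-seeding estimate (illustrative numbers only).
seeded_total = 20        # defects intentionally added
seeded_found = 15        # seeded defects detected by the test effort
native_found = 60        # real (non-seeded) defects detected

detection_ratio = seeded_found / seeded_total                 # 0.75
estimated_native_total = native_found / detection_ratio       # 80.0
estimated_remaining = estimated_native_total - native_found   # 20.0

print(f"estimated native defects: {estimated_native_total:.0f}")
print(f"estimated remaining defects: {estimated_remaining:.0f}")
```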






42. The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage. [IEEE 610]
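A worked example with invented numbers: availability is the proportion of required time for which the system was actually operational and accessible, expressed as a percentage.

```python
uptime_hours = 719.0    # time the system was operational and accessible
downtime_hours = 1.0    # time it was not, although it was required

availability = uptime_hours / (uptime_hours + downtime_hours) * 100
print(f"availability: {availability:.3f}%")  # availability: 99.861%
```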






43. A tool for seeding (i.e. intentionally inserting) faults in a component or system.






44. Testing of software used to convert data from existing systems for use in replacement systems.






45. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]






46. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]






47. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]
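A worked example with invented numbers, dividing a failure count by two different units of measure:

```python
failures = 12
operating_hours = 4000
transactions = 1_500_000

print(failures / operating_hours)           # 0.003 failures per hour
print(failures * 1_000_000 / transactions)  # 8.0 failures per million transactions
```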






48. A black box test design technique in which test cases are designed based on boundary values. See also boundary value. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
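As an illustration, a minimal Python sketch choosing boundary values for a hypothetical valid input range of 1 to 100, taking the values on and immediately either side of each edge:

```python
lower, upper = 1, 100  # hypothetical valid range (inclusive)
boundary_values = [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]
print(boundary_values)  # [0, 1, 2, 99, 100, 101]
```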






49. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






50. The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.