Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh: all questions and answers are randomly picked and reordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find an answer obvious; taking the test repeatedly still reinforces your understanding.

1. A device or storage area used to store data temporarily to compensate for differences in rates of data flow, time of occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data. [IEEE 610]

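For example, a bounded buffer lets a fast producer and a slow consumer cooperate without data loss. The sketch below is a minimal Python illustration; the sizes, counts and delays are invented for this example.

    import queue
    import threading
    import time

    buf = queue.Queue(maxsize=8)       # the buffer: holds at most 8 items

    def producer():
        for i in range(32):
            buf.put(i)                 # blocks while the buffer is full

    def consumer():
        for _ in range(32):
            item = buf.get()           # blocks while the buffer is empty
            time.sleep(0.01)           # consumes more slowly than the producer

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
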
2. A set of exit criteria.

3. Operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by operator and/or administrator, focusing on operational aspects, e.g. recoverability, resource-behavior, installability and technical compliance.

4. The insertion of additional code into the program in order to collect information about program behavior during execution, e.g. for measuring code coverage.

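As an illustration, a coverage tool might rewrite a function roughly as below. The hit() probe and the recorded labels are invented for this sketch; real tools insert equivalent calls automatically.

    executed = set()

    def hit(stmt):
        executed.add(stmt)        # inserted probe: records that stmt ran

    # Original function:
    #     def sign(x):
    #         if x < 0:
    #             return -1
    #         return 1

    # The same function after instrumentation:
    def sign(x):
        hit("if")
        if x < 0:
            hit("return -1")
            return -1
        hit("return 1")
        return 1

    sign(-5)
    print(executed)   # 'return 1' never ran, so coverage is incomplete
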
5. The process of testing to determine the maintainability of a software product.

6. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]

7. The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

8. A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

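For instance, in the decision below each condition (a > 0, b > 0) and the decision as a whole must each evaluate both true and false. The function and values are invented for this sketch; because Python's 'and' operator short-circuits, three test cases are used so every condition is actually evaluated both ways.

    def accept(a, b):
        # one decision built from two conditions
        return a > 0 and b > 0

    assert accept(1, 1) is True     # a>0: T, b>0: T, decision: T
    assert accept(1, -1) is False   # a>0: T, b>0: F, decision: F
    assert accept(-1, 1) is False   # a>0: F (b not evaluated), decision: F
    # Every condition outcome and every decision outcome has been exercised.
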
9. The last executable statement within a component.

10. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

11. Testing that involves the execution of the software of a component or system.

12. Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.

13. The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase.

14. Testing the changes to an operational system or the impact of a changed environment on an operational system.

15. A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

16. Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

17. A tool that carries out static analysis.

18. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]

19. A set of interrelated activities, which transform inputs into outputs. [ISO 12207]

20. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.

21. Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044]

22. A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]

23. A five level staged framework for test process improvement, related to the Capability Maturity Model Integration (CMMI), that describes the key elements of an effective test process.

24. An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]

25. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished.

26. An input for which the specification predicts a result.

27. An executable statement where a variable is assigned a value.

28. A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items.

29. A model structure wherein attaining the goals of a set of process areas establishes a maturity level; each level builds a foundation for subsequent levels. [CMMI]

30. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

31. The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]

32. A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer review: a review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.

33. A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.

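A sketch of the idea; the function is invented for this example, and the line numbers in the comments refer only to this fragment:

    def sum_below(values, limit):
        total = 0              # line 1
        for v in values:       # line 2
            if v > limit:      # line 3
                continue       # line 4: jumps back to the loop header
            total += v         # line 5
        return total           # line 6

    # One LCSAJ of this fragment is the triple (1, 4, 2): the linear
    # sequence of executable statements starts at line 1, ends at the
    # jump on line 4, and control is transferred to line 2.
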
34. The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]

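As a small illustration, Python's standard tracemalloc module can observe memory behavior while code runs; the workload function here is invented for the sketch.

    import tracemalloc

    def workload():
        return [str(i) * 10 for i in range(100_000)]

    tracemalloc.start()        # begin observing allocations
    data = workload()          # behavior is evaluated during execution
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"current: {current} bytes, peak: {peak} bytes")
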
35. The percentage of all single condition outcomes that independently affect a decision outcome that have been exercised by a test case suite. 100% condition determination coverage implies 100% decision condition coverage.

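The subtle part is "independently affect": for each condition there must be a pair of tests in which only that condition changes and the decision outcome changes with it. A minimal sketch (function and names invented for this example):

    def grant(age_ok, id_ok):
        return age_ok and id_ok         # one decision, two conditions

    assert grant(True, True) is True
    assert grant(False, True) is False  # flipping only age_ok flips the outcome
    assert grant(True, False) is False  # flipping only id_ok flips the outcome
    # Three tests show each condition independently affecting the decision.
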
36. An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation. [IEEE 610]

37. A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]

38. Testing to determine the robustness of the software product.

39. Procedure to derive and/or select test cases for non-functional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique: procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

40. A test approach in which the test suite comprises all combinations of input values and preconditions.

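Even small input domains make this approach impractical; counting the combinations shows why. The domains below are invented for this sketch.

    import itertools

    day = range(1, 32)        # 31 values
    month = range(1, 13)      # 12 values
    year = range(1900, 2100)  # 200 values

    combos = sum(1 for _ in itertools.product(day, month, year))
    print(combos)             # 31 * 12 * 200 = 74,400 cases for one date field
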
41. A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

42. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178B]

43. A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. requirements management tool, from specified test conditions held in the tool itself, or from code.

44. A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST (acronym for Computer Aided Software Testing).

45. The capability of the software product to be attractive to the user. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

46. A risk directly related to the test object. See also risk: a factor that could result in future negative consequences; usually expressed as impact and likelihood.

47. A tool used to check that no broken hyperlinks are present on a web site.

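A toy version of such a tool fits in a few lines of standard-library Python; real checkers also crawl recursively, throttle requests and handle redirects. The URL below is a placeholder.

    from html.parser import HTMLParser
    from urllib.error import HTTPError, URLError
    from urllib.parse import urljoin
    from urllib.request import Request, urlopen

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def check_links(page_url):
        html = urlopen(page_url).read().decode("utf-8", errors="replace")
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            url = urljoin(page_url, link)
            if not url.startswith(("http://", "https://")):
                continue                       # skip mailto:, javascript:, ...
            try:
                urlopen(Request(url, method="HEAD"))
            except (HTTPError, URLError) as exc:
                print(f"broken: {url} ({exc})")

    check_links("https://example.com/")        # placeholder URL
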
48. The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.

49. Choosing a set of input values to force the execution of a given path.

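For example, each path below is forced by picking inputs that drive every decision along it to the required outcome; the function and values are invented for this sketch.

    def classify(age):
        if age < 0:
            return "invalid"   # path A
        if age < 18:
            return "minor"     # path B
        return "adult"         # path C

    assert classify(-1) == "invalid"  # forces age < 0 to be true
    assert classify(10) == "minor"    # forces age < 0 false, age < 18 true
    assert classify(30) == "adult"    # forces both decisions false
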
50. The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]