Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh: all questions and answers are randomly selected and reordered each time you load the test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so the correct answer may sometimes seem obvious; even so, taking the test repeatedly reinforces your understanding.
1. The capability of the software product to be installed in a specified environment. [ISO 9126] See also portability: the ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






2. An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify: (1) the form or content of the products to be produced, (2) the process by which the products shall be produced, and (3) how compliance to standards or guidelines shall be measured. [IEEE 1028]






3. An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested.
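The stub-based progression in question 3 can be sketched in Python. This is an illustrative sketch only: the `build_report` and database component names are invented, and a real top-down integration would replace stubs level by level down the hierarchy.

```python
# Hypothetical two-level hierarchy: a top-level report component that
# depends on a lower-level database component (names are illustrative).

def database_stub(user_id):
    # Stub simulating the not-yet-integrated lower-level component.
    return {"name": "placeholder"}

def real_database(user_id):
    # The real lower-level component, integrated in a later step.
    return {"name": f"user-{user_id}"}

def build_report(user_id, fetch=database_stub):
    # Top-level component under test; its dependency is injected so a
    # stub can stand in until the real component is ready.
    record = fetch(user_id)
    return f"Report for {record['name']}"

# Step 1: test the top component first, lower level simulated by a stub.
assert build_report(7) == "Report for placeholder"

# Step 2: the tested top component now drives the real lower-level one.
assert build_report(7, fetch=real_database) == "Report for user-7"
```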






4. The process of testing to determine the resource utilization of a software product. See also efficiency testing: the process of testing to determine the efficiency of a software product.






5. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review: a review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.






6. Acronym for Computer Aided Software Engineering.






7. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
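A cause-effect table like the one in question 7 can drive test design directly, with one test case per rule. The rules and the `landing_page` function below are invented purely for illustration:

```python
# A minimal decision table (illustrative rules, not from any standard):
# each rule maps a combination of causes to an expected effect.
decision_table = [
    # (logged_in, is_admin) -> expected landing page
    ((False, False), "login"),
    ((False, True),  "login"),
    ((True,  False), "home"),
    ((True,  True),  "dashboard"),
]

def landing_page(logged_in, is_admin):
    # Hypothetical system under test.
    if not logged_in:
        return "login"
    return "dashboard" if is_admin else "home"

# Each row of the table becomes one test case.
for (logged_in, is_admin), expected in decision_table:
    assert landing_page(logged_in, is_admin) == expected
```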






8. A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610] See also performance testing, load testing.






9. An integration test type that is concerned with testing the interfaces between components or systems.






10. The behavior produced/observed when a component or system is tested.






11. A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.






12. Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test driven development: a way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
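The test-first paradigm in question 12 can be sketched minimally: the test is written before the code, fails at first, and then just enough code is written to make it pass. `slugify` is a made-up example function, not a glossary term:

```python
# Test-first sketch (illustrative only).

def test_slugify():
    # Written first: this fails until slugify below is implemented.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

def slugify(title):
    # Just enough implementation to make the test above pass.
    return title.strip().lower().replace(" ", "-")

test_slugify()
```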






13. Deviation of the component or system from its expected delivery, service or result. [After Fenton]






14. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.






15. The evaluation of a condition to True or False.






16. Formal or informal testing conducted during the implementation of a component or system - usually in the development environment by developers. [After IEEE 610]






17. A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management: the planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






18. A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system. See also performance testing, stress testing.






19. Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.






20. The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase.






21. A white box test design technique in which test cases are designed to execute branches.
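Branch testing, as in question 21, aims to execute every branch outcome of each decision. A minimal sketch with an invented function under test:

```python
def classify(n):
    # Function under test with one decision and two branches
    # (illustrative example, not from the glossary).
    if n < 0:
        return "negative"
    return "non-negative"

# Branch testing: one test case per branch outcome, so both the True
# and the False outcome of the decision are executed.
branch_tests = [(-1, "negative"), (0, "non-negative")]
for value, expected in branch_tests:
    assert classify(value) == expected
```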






22. A description of a component's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource utilization).






23. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






24. Testing to determine the robustness of the software product.






25. A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy: a system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






26. A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]






27. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]






28. Any (work) product that must be delivered to someone other than the (work) product's author.






29. (1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. See also Capability Maturity Model, Test Maturity Model. (2) The capability of the software product to avoid failure as a result of defects in the software. [ISO 9126] See also reliability.






30. A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.






31. Testing to determine the safety of a software product.






32. The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).






33. A sequence of one or more consecutive executable statements containing no branches. Note: A node in a control flow graph represents a basic block.






34. A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test, etc. A test type may take place on one or more test levels or test phases. [After TMap]






35. Execution of a test on a specific version of the test object.






36. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.






37. A skilled professional who is involved in the testing of a component or system.






38. A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.






39. A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. [After IEEE 829]






40. The level of (business) importance assigned to an item, e.g. defect.






41. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. [After IEEE 610]
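The parts of a test case listed in question 41 map naturally onto a small record. The field names and the `run` helper below are illustrative, not from any standard:

```python
# A test case as a record of input values, preconditions, expected
# result, and postconditions (field names are illustrative).
test_case = {
    "objective": "verify absolute value of a negative input",
    "preconditions": "none",
    "inputs": {"x": -5},
    "expected_result": 5,
    "postconditions": "none",
}

def run(case):
    # Execute the test object (here, the built-in abs) and compare the
    # actual result against the expected result.
    actual = abs(case["inputs"]["x"])
    return "pass" if actual == case["expected_result"] else "fail"

assert run(test_case) == "pass"
```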






42. A tool that provides objective measures of what structural elements, e.g. statements, branches, have been exercised by a test suite.






43. Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. [After ISO 9126]






44. The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.






45. A requirement that specifies a function that a component or system must perform. [IEEE 610]






46. The capability of the software product to be adapted for different specified environments without applying actions or means other than those provided for this purpose for the software considered. [ISO 9126] See also portability: the ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






47. A sequence of transactions in a dialogue between a user and the system with a tangible result.






48. A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.






49. Any event occurring that requires investigation. [After IEEE 1008]






50. The percentage of boundary values that have been exercised by a test suite.
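Boundary value coverage, as in question 50, measures how many of the identified boundary values a test suite exercises. A minimal sketch, assuming a made-up requirement (accept ages 18 to 65 inclusive):

```python
# Hypothetical requirement: accept ages in the range 18..65 inclusive.
def is_eligible(age):
    return 18 <= age <= 65

# Boundary values of the valid partition plus their invalid neighbours,
# each paired with the expected outcome.
boundary_cases = {17: False, 18: True, 65: True, 66: False}

exercised = []
for age, expected in boundary_cases.items():
    assert is_eligible(age) == expected
    exercised.append(age)

# All 4 of the 4 identified boundary values exercised: 100% coverage.
coverage = len(exercised) / len(boundary_cases) * 100
assert coverage == 100.0
```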