Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so some answers may seem obvious at times, but retaking the test reinforces your understanding each time.
1. A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program. [CMMI]






2. Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.






3. A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.






4. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






5. Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.






6. A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.






7. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
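For context, the estimation arithmetic behind defect seeding can be sketched in a few lines of Python; the numbers below are made up purely for illustration:

# Illustrative defect-seeding estimate (hypothetical numbers).
# If testing finds a known fraction of the seeded defects, the same
# detection rate is assumed to apply to the real (unseeded) defects.
seeded_total = 20        # defects intentionally added
seeded_found = 15        # seeded defects detected during testing
real_found = 60          # genuine (unseeded) defects detected

detection_rate = seeded_found / seeded_total             # 0.75
estimated_real_total = real_found / detection_rate       # 80
estimated_remaining = estimated_real_total - real_found  # about 20 latent defects

print(f"Estimated defects remaining: {estimated_remaining:.0f}")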






8. The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied. [IEEE 610]






9. All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis.






10. A factor that could result in future negative consequences; usually expressed as impact and likelihood.






11. The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.






12. Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [ISO 9000]






13. Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]






14. A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.






15. A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment.






16. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]






17. Testing that involves the execution of the software of a component or system.






18. Coverage measures based on the internal structure of a component or system.






19. An attribute of a test indicating whether the same results are produced each time the test is executed.
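For context, a small Python sketch of how a test can be made to produce the same result on every execution, here by pinning a random seed; the shuffle function is a made-up example:

import random

def shuffle_deck(seed=None):
    # A fixed seed removes the only source of nondeterminism.
    deck = list(range(52))
    random.Random(seed).shuffle(deck)
    return deck

def test_shuffle_is_repeatable():
    # With the same seed, the same deck order is produced every time.
    assert shuffle_deck(seed=42) == shuffle_deck(seed=42)

test_shuffle_is_repeatable()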






20. The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability. The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]






21. Any (work) product that must be delivered to someone other than the (work) product's author.






22. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






23. A measurement scale and the method used for measurement. [ISO 14598]






24. A tool that supports the validation of models of the software or system [Graham].






25. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
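For context, a minimal Python sketch of this technique, assuming a hypothetical age-classification function with three partitions (negative, minor, adult) and one representative value per partition:

# Sketch of equivalence partitioning for a made-up function under test.
def classify_age(age: int) -> str:
    if age < 0:
        return "invalid"
    return "minor" if age < 18 else "adult"

# One representative test case per equivalence partition,
# so each partition is covered at least once.
representatives = {-1: "invalid", 10: "minor", 30: "adult"}

for value, expected in representatives.items():
    assert classify_age(value) == expected, (value, expected)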






26. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
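For context, a minimal Python sketch of such a chained test sequence, using a made-up Account class; the postcondition of the first test (an account exists) becomes the precondition of the second:

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

def test_create_account(state):
    state["account"] = Account()      # postcondition: an account now exists
    assert state["account"].balance == 0

def test_deposit(state):
    account = state["account"]        # precondition: the account from the previous test
    account.deposit(100)
    assert account.balance == 100

shared_state = {}
test_create_account(shared_state)
test_deposit(shared_state)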






27. A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.






28. Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]






29. A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs. [After IEEE 610, DO178b] See also emulator. A device, computer program or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610, DO178b]






30. Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).






31. Testing performed to expose defects in the interfaces and interaction between integrated components.






32. The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]






33. The process of running a test on the component or system under test - producing actual result(s).






34. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing. Testing performed to expose defects in the interfaces and interaction between integrated components.






35. Testing using input values that should be rejected by the component or system. See also error tolerance. The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].
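For context, one common way to express this kind of test in Python is with pytest; parse_port below is a made-up function under test:

import pytest

def parse_port(text: str) -> int:
    port = int(text)            # raises ValueError for non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Negative tests: each of these inputs should be rejected by the component.
@pytest.mark.parametrize("bad_input", ["abc", "-1", "70000"])
def test_invalid_ports_are_rejected(bad_input):
    with pytest.raises(ValueError):
        parse_port(bad_input)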






36. A continuous framework for test process improvement that describes the key elements of an effective test process - especially targeted at system testing and acceptance testing.






37. Testing, either functional or non-functional, without reference to the internal structure of the component or system. Black-box test design technique: procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.






38. The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]






39. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






40. A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]






41. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.






42. The activity of establishing or updating a test plan.






43. A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]
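For context, a minimal Python sketch of a finite state machine with an accompanying action per transition, using the classic turnstile example:

# States, events, resulting states and actions for a turnstile.
TRANSITIONS = {
    ("locked", "coin"): ("unlocked", "release latch"),
    ("unlocked", "push"): ("locked", "engage latch"),
    ("locked", "push"): ("locked", "stay locked"),
    ("unlocked", "coin"): ("unlocked", "refund coin"),
}

def step(state: str, event: str) -> str:
    new_state, action = TRANSITIONS[(state, event)]
    print(f"{state} --{event}--> {new_state} ({action})")
    return new_state

state = "locked"
for event in ["push", "coin", "push"]:
    state = step(state, event)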






44. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






45. A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.






46. A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [After IEEE 829]






47. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]






48. A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged.
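For context, a minimal Python sketch of the load-generation idea, simulating several concurrent "users" against a stand-in function and recording response times; this is an illustration, not a real tool:

import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Stand-in for one test transaction against the system under test.
    start = time.perf_counter()
    time.sleep(0.05)
    return time.perf_counter() - start

# 10 simulated users execute 100 transactions; response times are logged.
with ThreadPoolExecutor(max_workers=10) as pool:
    response_times = list(pool.map(lambda _: transaction(), range(100)))

print(f"avg response: {sum(response_times) / len(response_times):.3f}s")
print(f"max response: {max(response_times):.3f}s")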






49. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished.






50. Testing to determine the robustness of the software product.