Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study first.
  • Match each statement with the correct term.
  • Don't refresh: all questions and answers are randomly picked and reordered every time you load the test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so some answers may seem obvious at times, but repeated practice reinforces your understanding each time you take the test.
1. Acceptance testing by users/customers at their site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

2. An entity in a programming language, which is typically the smallest indivisible unit of execution.

3. A pointer that references a location that is out of scope for that pointer or that does not exist. See also pointer. A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed.

4. A test plan that typically addresses multiple test levels. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others the test items and the features to be tested.

5. A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.

6. A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]

7. Comparison of actual and expected results, performed after the software has finished running.

8. A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. [IEEE 610] See also simulator. A device, computer program or system used during testing, which behaves or operates like a given system when provided with a set of controlled inputs.

9. A test environment comprised of stubs and drivers needed to execute a test.

10. The last executable statement within a component.

11. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

12. The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

13. An entity or property used as a basis for test coverage, e.g. equivalence partitions or code statements.

14. A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

15. The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

16. A high level document describing the principles, approach and major objectives of the organization regarding testing.

17. Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).

18. The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

19. A white box test design technique in which test cases are designed to execute LCSAJs.

20. The percentage of executable statements that have been exercised by a test suite.

21. A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]

22. The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]

23. A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

24. An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.

25. An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress.

26. The process of testing to determine the recoverability of a software product. See also reliability testing. The process of testing to determine the reliability of a software product.

27. A requirement that specifies a function that a component or system must perform. [IEEE 610]

28. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

29. The process of testing to determine the recoverability of a software product.

30. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.

31. A minimal software item that can be tested in isolation.

32. Supplied instructions on any suitable media, which guide the installer through the installation process. This may be a manual guide, step-by-step procedure, installation wizard, or any other similar process description.

33. Part of quality management focused on providing confidence that quality requirements will be fulfilled. [ISO 9000]

34. The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]

35. The activity of establishing or updating a test plan.

36. A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied.

37. The capability of the software product to provide an appropriate set of functions for specified tasks and user objectives. [ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.

38. A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table. [Veenendaal] See also decision table. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

39. Testing based on an analysis of the internal structure of the component or system.

40. Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

41. An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]

42. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]

43. The process of testing to determine the reliability of a software product.

44. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

45. A path that cannot be exercised by any set of possible input values.

46. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

47. The process of combining components or systems into larger assemblies.

48. A test result in which a defect is reported although no such defect actually exists in the test object.

49. The calculated approximation of a result (e.g. effort spent, completion date, costs involved, number of test cases, etc.) which is usable even if input data may be incomplete, uncertain, or noisy.

50. Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.