Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might find some answers obvious at times, but you will see that it reinforces your understanding as you take the test each time.
1. The activity of establishing or updating a test plan.






2. Recording the details of any incident that occurred, e.g. during testing.






3. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






4. The process of running a test on the component or system under test, producing actual result(s).






5. The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity. The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph. [After McCabe]
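For illustration, a minimal sketch with a hypothetical classify() function, using the common shortcut that, for single-entry/single-exit structured code, cyclomatic complexity equals the number of decision points plus one:

    # Hypothetical example: a small Python function with two decision points.
    def classify(age):
        if age < 13:          # decision 1
            return "child"
        elif age < 20:        # decision 2
            return "teenager"
        return "adult"

    # Decision points = 2, so cyclomatic complexity = 2 + 1 = 3:
    # there are three independent paths through the function.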






6. The testing of individual software components. [After IEEE 610]
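As a sketch, a component (unit) test for a hypothetical add() function, using Python's built-in unittest module:

    import unittest

    def add(a, b):
        # Hypothetical component under test.
        return a + b

    class AddTests(unittest.TestCase):
        def test_add_two_positives(self):
            self.assertEqual(add(2, 3), 5)

        def test_add_negative(self):
            self.assertEqual(add(2, -3), -1)

    if __name__ == "__main__":
        unittest.main()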






7. A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]






8. Execution of a test on a specific version of the test object.






9. Acronym for Computer Aided Software Engineering.






10. The process of testing to determine the resource-utilization of a software product. See also efficiency testing. The process of testing to determine the efficiency of a software product.






11. A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.






12. A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]






13. A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.






14. The process of testing to determine the recoverability of a software product. See also reliability testing. The process of testing to determine the reliability of a software product.






15. The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]






16. A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
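A minimal sketch of the idea, assuming a hypothetical is_adult() function and one hand-written mutant (a real mutation testing tool would generate and run the mutants automatically):

    def is_adult(age):
        return age >= 18        # original

    def is_adult_mutant(age):
        return age > 18         # mutant: '>=' changed to '>'

    # A test suite that checks the boundary value 18 "kills" this mutant,
    # because the original and the mutant return different results for it.
    assert is_adult(18) is True
    assert is_adult_mutant(18) is False   # the mutant is detected (killed)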






17. The process of confirming that a component, system or person complies with its specified requirements, e.g. by passing an exam.






18. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.






19. The process of testing to determine the portability of a software product.






20. A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.






21. A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.






22. The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]






23. The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
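For example, a small worked calculation with hypothetical figures:

    # Hypothetical figures: 25 defects found in a 12,500-line component.
    defects = 25
    lines_of_code = 12_500

    # Defect density is commonly reported per 1,000 lines of code (KLOC).
    defect_density = defects / (lines_of_code / 1000)
    print(defect_density)   # 2.0 defects per KLOC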






24. An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.






25. Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.






26. A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process. [After TMap]






27. A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute or structural element.






28. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests. [After Bach]






29. A white box test design technique in which test cases are designed to execute LCSAJs.






30. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).






31. A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. [Musa]






32. Testing based on an analysis of the internal structure of the component or system.






33. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






34. Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]






35. An attribute of a test indicating whether the same results are produced each time the test is executed.






36. Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.






37. The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out.






38. The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).
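As a sketch, the result of such an assessment is often expressed as a risk level computed as impact times likelihood; the risk names and 1-5 scores below are hypothetical:

    # Hypothetical risks scored on a 1-5 scale for likelihood and impact.
    risks = {
        "payment gateway outage": {"likelihood": 2, "impact": 5},
        "minor UI misalignment":  {"likelihood": 4, "impact": 1},
    }

    for name, r in risks.items():
        exposure = r["likelihood"] * r["impact"]
        print(name, exposure)   # 10 and 4: the outage risk ranks higher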






39. A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as 'best' by other peer organizations.






40. [Beizer] A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
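A minimal sketch, assuming a hypothetical discount_rate() rule whose input domain falls into three partitions (invalid, no discount, discount), with one representative value tested per partition:

    def discount_rate(order_total):
        # Hypothetical rule: negative totals are invalid, orders of 100 or
        # more get 10% off, everything else gets no discount.
        if order_total < 0:
            raise ValueError("invalid total")
        return 0.10 if order_total >= 100 else 0.0

    # One representative test value per equivalence partition.
    assert discount_rate(50) == 0.0       # partition: 0 <= total < 100
    assert discount_rate(250) == 0.10     # partition: total >= 100
    try:
        discount_rate(-5)                 # partition: total < 0 (invalid)
    except ValueError:
        pass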






41. A set of exit criteria.






42. An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]






43. Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and, if so, which test cases are needed.






44. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
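One common way the estimate is made is the simple seeded/found proportion; the figures below are hypothetical:

    # Hypothetical figures: 20 known defects were seeded into the system.
    seeded_total = 20
    seeded_found = 15      # seeded defects detected by the test effort
    real_found = 60        # previously unknown (real) defects detected

    # If testing finds seeded and real defects at roughly the same rate, the
    # estimated total of real defects is real_found * seeded_total / seeded_found.
    estimated_real_total = real_found * seeded_total / seeded_found
    estimated_remaining = estimated_real_total - real_found
    print(estimated_real_total, estimated_remaining)   # 80.0 20.0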






45. The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).






46. A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions. [Chow] See also state transition testing. A black box test design technique in which test cases are designed to execute valid and invalid state transitions.






47. A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.






48. A minimal software item that can be tested in isolation.






49. A chronological record of relevant details about the execution of tests. [IEEE 829]






50. An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]






Can you answer 50 questions in 15 minutes?


