Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the correct answer obvious. Even so, you will find that retaking the test reinforces your understanding each time.
1. A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.
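For illustration, a minimal Python sketch (the function and values are hypothetical) showing test cases that execute every condition outcome and every decision outcome:

    # Function under test: one decision built from two conditions.
    def can_withdraw(balance, amount):
        if amount > 0 and amount <= balance:   # decision with 2 conditions
            return True
        return False

    # Three test cases suffice to execute each condition outcome and
    # each decision outcome:
    assert can_withdraw(100, 50) is True     # cond1=T, cond2=T, decision=T
    assert can_withdraw(100, -5) is False    # cond1=F, decision=F
    assert can_withdraw(100, 200) is False   # cond1=T, cond2=F, decision=F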






2. A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution, and test analysis. [TMap] See also CAST, the acronym for Computer Aided Software Testing.






3. A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management: the planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






4. Testing using input values that should be rejected by the component or system. See also error tolerance: the ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]






5. The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied. [IEEE 610]






6. A device or storage area used to store data temporarily for differences in rates of data flow, time or occurrence of events, or amounts of data that can be handled by the devices or processes involved in the transfer or use of the data. [IEEE 610]
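As a concrete illustration, a minimal Python sketch (the capacity and item values are arbitrary) of a buffer absorbing the rate difference between a producer and a consumer:

    from collections import deque

    buffer = deque(maxlen=8)   # fixed-capacity temporary store

    # Producer writes in a burst, faster than the consumer reads...
    for item in range(5):
        buffer.append(item)

    # ...and the consumer drains the buffer later, at its own pace.
    consumed = [buffer.popleft() for _ in range(len(buffer))]
    print(consumed)   # [0, 1, 2, 3, 4]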






7. Any event occurring that requires investigation. [After IEEE 1008]






8. Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.






9. A technique used to analyze the causes of faults (defects). The technique visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults to disclose.






10. The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]






11. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
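For illustration, a minimal Python sketch (the function is hypothetical) that executes all combinations of single condition outcomes within one statement:

    import itertools

    def access_allowed(is_admin, is_owner):
        return is_admin or is_owner    # one decision, two conditions

    # Two conditions -> 2^2 = 4 combinations of condition outcomes to test.
    for is_admin, is_owner in itertools.product([True, False], repeat=2):
        assert access_allowed(is_admin, is_owner) == (is_admin or is_owner)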






12. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. See also high level test case.






13. Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers. [After IEEE 610]






14. A technique used to characterize the elements of risk. The result of a hazard analysis will drive the methods used for development and testing of a system. See also risk analysis: the process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).






15. Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126] See also functionality: the capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]






16. Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.






17. A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed. [CMMI]






18. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
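For illustration, a minimal Python sketch (the function is hypothetical; in practice a tool such as coverage.py reports the percentage) of a test suite reaching 100% branch coverage:

    def sign(x):
        if x >= 0:
            return 1     # "true" branch
        return -1        # "false" branch

    assert sign(3) == 1      # exercises the true branch
    assert sign(-3) == -1    # exercises the false branch
    # Both of the 2 branches executed -> 100% branch coverage, which here
    # also gives 100% decision coverage and 100% statement coverage.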






19. Execution of the test process against a single identifiable release of the test object.






20. An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.






21. A measurement scale and the method used for measurement. [ISO 14598]






22. The percentage of equivalence partitions that have been exercised by a test suite.
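For illustration, a minimal Python sketch (the validator and its boundaries are hypothetical) exercising all three equivalence partitions of an input:

    def valid_age(age):
        return 0 <= age <= 120    # valid partition: 0..120

    # One representative value per partition: below, inside, above.
    partitions = {"below": -1, "inside": 30, "above": 150}
    results = {name: valid_age(value) for name, value in partitions.items()}
    assert results == {"below": False, "inside": True, "above": False}
    # 3 of 3 partitions exercised -> 100% equivalence partition coverage.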






23. A minimal software item that can be tested in isolation.






24. A formula-based test estimation method based on function point analysis. [TMap]






25. The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.






26. The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.






27. Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.






28. A computational model consisting of a finite number of states and transitions between those states, possibly with accompanying actions. [IEEE 610]
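For illustration, a minimal Python sketch of such a model (the turnstile states and events are a classic textbook example, not from the glossary):

    # States and transitions of a turnstile, as a lookup table.
    transitions = {
        ("locked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }

    state = "locked"
    for event in ["coin", "push", "coin"]:
        state = transitions.get((state, event), state)  # ignore invalid events
    print(state)   # unlocked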






29. The process of testing to determine the resource-utilization of a software product. See also efficiency testing: the process of testing to determine the efficiency of a software product.






30. The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]






31. The process of testing to determine the interoperability of a software product. See also functionality testing: the process of testing to determine the functionality of a software product.






32. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]
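For illustration, a minimal Python sketch where a trusted reference (the built-in sorted) serves as the oracle for a hypothetical system under test:

    def my_sort(items):
        # System under test: a naive selection-style sort (illustrative).
        out = list(items)
        for i in range(len(out)):
            for j in range(i + 1, len(out)):
                if out[j] < out[i]:
                    out[i], out[j] = out[j], out[i]
        return out

    data = [3, 1, 2, 2, 0]
    assert my_sort(data) == sorted(data)   # oracle supplies expected result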






33. The capability of the software product to enable the user to understand whether the software is suitable, and how it can be used for particular tasks and conditions of use. [ISO 9126] See also usability: the capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






34. Comparison of actual and expected results - performed after the software has finished running.






35. A review not based on a formal (documented) procedure.






36. The capability of the software product to be diagnosed for deficiencies or causes of failures in the software, or for the parts to be modified to be identified. [ISO 9126] See also maintainability: the ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment. [ISO 9126]






37. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).






38. A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]






39. Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is being used.






40. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.






41. The use of software, e.g. capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
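For illustration, a minimal hand-rolled Python sketch of this idea (real tools range from pytest to commercial capture/playback products; the function and cases are hypothetical):

    def add(a, b):
        return a + b    # system under test (illustrative)

    # The harness controls execution, compares actual to expected results,
    # and reports, with no manual checking of outputs.
    cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
    for (a, b), expected in cases:
        actual = add(a, b)
        status = "PASS" if actual == expected else "FAIL"
        print(f"add({a}, {b}) = {actual}, expected {expected}: {status}")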






42. A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.






43. A variable (whether stored within a component or outside) that is written by a component.






44. A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management.






45. Acronym for Computer Aided Software Engineering.






46. A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]






47. A pointer within a web page that leads to other web pages.






48. The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.






49. A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.






50. The importance of a risk as defined by its characteristics impact and likelihood. The level of risk can be used to determine the intensity of testing to be performed. A risk level can be expressed either qualitatively (e.g. high, medium, low) or quantitatively.
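For illustration, a minimal worked example in Python (the scales, thresholds, and the multiplication scheme are assumptions for the sketch, not part of the definition):

    impact = 4         # damage if the risk materializes, on a 1-5 scale
    likelihood = 0.3   # estimated probability of occurrence

    risk_level = impact * likelihood    # quantitative expression
    print(risk_level)                   # 1.2

    # The same level expressed qualitatively, via arbitrary thresholds:
    print("high" if risk_level > 2 else "medium" if risk_level > 1 else "low")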





