Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might find some answers obvious at times, but repeating the test reinforces your understanding each time you take it.
1. A method to determine test suite thoroughness by measuring the extent to which a test suite can discriminate the program from slight variants (mutants) of the program.
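Below is a minimal Python sketch of this idea; the function names and the single hand-made mutant are illustrative assumptions, not the output of any particular mutation tool.

# Mutation analysis sketch: a test suite is "thorough" if it can tell
# the original program apart from slightly changed variants (mutants).

def is_positive(x):
    return x > 0          # original program

def is_positive_mutant(x):
    return x >= 0         # mutant: '>' replaced by '>='

def test_suite(fn):
    # Each case is (input, expected result).
    cases = [(5, True), (-3, False), (0, False)]
    return all(fn(value) == expected for value, expected in cases)

if __name__ == "__main__":
    assert test_suite(is_positive)             # original passes
    killed = not test_suite(is_positive_mutant)
    print("mutant killed:", killed)            # True: the suite detected the mutant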






2. An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.






3. The process of testing the installability of a software product. See also portability testing: the process of testing to determine the portability of a software product.






4. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.






5. An occurrence in which one defect prevents the detection of another. [After IEEE 610]






6. A systematic approach to risk identification and analysis that identifies possible modes of failure and attempts to prevent their occurrence. See also Failure Mode, Effect and Criticality Analysis (FMECA).






7. Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is being used.
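A small Python sketch of such random testing follows; parse_age() and its rules are assumed purely for illustration.

# Random ("monkey") testing sketch: feed randomly generated inputs to a
# hypothetical parse_age() and only check that it never crashes unexpectedly.
import random
import string

def parse_age(text):
    # Assumed function under test: returns an int age or raises ValueError.
    value = int(text)
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

random.seed(0)  # reproducible run
for _ in range(1000):
    length = random.randint(0, 5)
    text = "".join(random.choice(string.printable) for _ in range(length))
    try:
        parse_age(text)
    except ValueError:
        pass  # rejected input is acceptable behaviour
    # Any other exception would escape the loop and signal a defect.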






8. The testing of individual software components. [After IEEE 610]
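As an illustration, a component test for an assumed discount() helper might look like the following sketch using Python's built-in unittest module.

# Component (unit) testing sketch: exercising one small component in
# isolation. discount() is an assumed example, not from any real codebase.
import unittest

def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 120)

if __name__ == "__main__":
    unittest.main()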






9. Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing: testing, either functional or non-functional, without reference to the internal structure of the component or system.






10. The capability of the software product to be installed in a specified environment. [ISO 9126] See also portability: the ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






11. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






12. All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis.






13. A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher-level documentation. The most formal review technique and therefore always based on a documented procedure.






14. A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]






15. A variable (whether stored within a component or outside) that is written by a component.






16. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.






17. A tool that carries out static analysis.






18. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review: a review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.






19. A technique used to analyze the causes of faults (defects). The technique visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults to disclose.






20. An expert-based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.






21. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.






22. Operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by operator and/or administrator, focusing on operational aspects, e.g. recoverability, resource behavior, installability and technical compliance.






23. The capability of the software product to be used in place of another specified software product for the same purpose in the same environment. [ISO 9126] See also portability: the ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






24. A software tool used to carry out instrumentation.






25. The process of testing to determine the efficiency of a software product.






26. The criteria used to (temporarily) stop all or a portion of the testing activities on the test items. [After IEEE 829]






27. Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.






28. A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]






29. Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.






30. Testing based on an analysis of the internal structure of the component or system.






31. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]






32. A tool that facilitates the recording and status tracking of defects and changes. Such tools often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects, and provide reporting facilities.






33. A test approach in which the test suite comprises all combinations of input values and preconditions.
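The sketch below, with made-up input domains, shows why exhaustively combining even a few inputs quickly becomes impractical.

# Exhaustive testing sketch: enumerating every combination of input values.
# The domains below are assumptions chosen only to show the combinatorial growth.
from itertools import product

browsers = ["Firefox", "Chrome", "Safari"]
locales = ["en", "de", "fr", "ja"]
user_roles = ["guest", "member", "admin"]
payment_methods = ["card", "paypal", "invoice", "crypto", "voucher"]

all_cases = list(product(browsers, locales, user_roles, payment_methods))
print(len(all_cases))        # 3 * 4 * 3 * 5 = 180 combinations already
for case in all_cases[:3]:
    print(case)              # ('Firefox', 'en', 'guest', 'card'), ...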






34. An entity in a programming language, which is typically the smallest indivisible unit of execution.






35. A test plan that typically addresses multiple test levels. See also test plan: a document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the test environment, the test design techniques, and the entry and exit criteria to be used. [After IEEE 829]






36. Testing using input values that should be rejected by the component or system. See also error tolerance: the ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]
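A possible Python sketch of this kind of negative testing is shown below; set_username() and its validation rules are assumptions for illustration only.

# Negative testing sketch: every input here should be rejected by the
# component under test.
def set_username(name):
    if not isinstance(name, str) or not 3 <= len(name) <= 20 or not name.isalnum():
        raise ValueError("invalid username")
    return name.lower()

rejected_inputs = ["", "ab", "x" * 21, "bad name!", None]
for candidate in rejected_inputs:
    try:
        set_username(candidate)
    except ValueError:
        continue                      # correct: the invalid input was rejected
    raise AssertionError(f"accepted invalid input: {candidate!r}")
print("all invalid inputs were rejected")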






37. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]






38. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]






39. A five-level staged framework for test process improvement, related to the Capability Maturity Model Integration (CMMI), that describes the key elements of an effective test process.






40. A tool for seeding (i.e. intentionally inserting) faults in a component or system.






41. A systematic evaluation of a software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management, that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.






42. The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. [ISO 9126] See also reliability: the ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations. [ISO 9126]






43. A test tool to perform automated test comparison of actual results with expected results.
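A minimal comparator might look like the sketch below; the record format and field names are assumptions, not tied to any particular tool.

# Test comparator sketch: automated comparison of actual results against
# expected ("golden") results, here for simple key/value records.
def compare_results(expected, actual):
    """Return a list of human-readable differences (empty list = pass)."""
    differences = []
    for key in sorted(set(expected) | set(actual)):
        if key not in actual:
            differences.append(f"missing result for {key!r}")
        elif key not in expected:
            differences.append(f"unexpected result for {key!r}")
        elif expected[key] != actual[key]:
            differences.append(f"{key!r}: expected {expected[key]!r}, got {actual[key]!r}")
    return differences

expected = {"status": "ok", "items": 3}
actual = {"status": "ok", "items": 2, "debug": True}
for line in compare_results(expected, actual):
    print(line)   # reports the mismatch on 'items' and the extra 'debug' key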






44. A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
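The sketch below shows one way such a table can drive test cases; the login rules in it are invented for illustration.

# Decision table sketch: each row combines input conditions (causes)
# with the expected action (effect), and each row becomes one test case.
decision_table = [
    # (valid_password, account_locked) -> expected action
    ((True,  False), "grant access"),
    ((True,  True),  "show locked message"),
    ((False, False), "show retry message"),
    ((False, True),  "show locked message"),
]

def login(valid_password, account_locked):
    # Assumed implementation under test.
    if account_locked:
        return "show locked message"
    return "grant access" if valid_password else "show retry message"

for (valid_password, account_locked), expected in decision_table:
    actual = login(valid_password, account_locked)
    assert actual == expected, (valid_password, account_locked, actual)
print("all decision-table cases passed")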






45. A tool that carries out static analysis.






46. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
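The sketch below computes 1-switch coverage (N = 1) for a toy state machine; the machine and the executed runs are assumptions chosen only to illustrate the calculation.

# N-switch coverage sketch (here N = 1): the share of all valid sequences of
# N+1 = 2 consecutive transitions that the executed tests have exercised.
from itertools import product

# Transitions of the state machine: (from_state, event, to_state).
transitions = [
    ("idle", "start", "running"),
    ("running", "pause", "paused"),
    ("paused", "resume", "running"),
    ("running", "stop", "idle"),
]

# All valid pairs of consecutive transitions (second starts where first ends).
valid_pairs = {
    (a, b) for a, b in product(transitions, repeat=2) if a[2] == b[0]
}

# Transition sequences actually executed by the test suite.
executed_runs = [
    [("idle", "start", "running"), ("running", "stop", "idle")],
    [("idle", "start", "running"), ("running", "pause", "paused"),
     ("paused", "resume", "running")],
]
exercised_pairs = {
    (run[i], run[i + 1]) for run in executed_runs for i in range(len(run) - 1)
}

coverage = len(exercised_pairs & valid_pairs) / len(valid_pairs)
print(f"1-switch coverage: {coverage:.0%}")   # 3 of 6 valid pairs -> 50%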






47. Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]
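A minimal back-to-back setup could look like the following sketch, where two assumed variance implementations are run on the same inputs and compared.

# Back-to-back testing sketch: two independent implementations of the same
# function are executed on identical inputs and their outputs are compared.
import math

def variance_v1(values):
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def variance_v2(values):
    # Alternative formulation: E[X^2] - E[X]^2.
    n = len(values)
    return sum(v * v for v in values) / n - (sum(values) / n) ** 2

inputs = [[1.0, 2.0, 3.0], [4.0, 4.0, 4.0], [-1.5, 0.0, 2.5, 7.0]]
for values in inputs:
    a, b = variance_v1(values), variance_v2(values)
    if not math.isclose(a, b, rel_tol=1e-9, abs_tol=1e-9):
        print(f"discrepancy for {values}: {a} vs {b}")
    else:
        print(f"{values}: outputs agree ({a:.4f})")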






48. A white box test design technique in which test cases are designed to execute branches.
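The sketch below illustrates the idea with an assumed shipping_cost() function whose two decisions are each taken in both directions.

# Branch testing sketch: test cases chosen so that both outcomes of each
# decision in shipping_cost() are executed.
def shipping_cost(weight_kg, express):
    if weight_kg > 20:          # branch A: true / false
        cost = 15.0
    else:
        cost = 5.0
    if express:                 # branch B: true / false
        cost *= 2
    return cost

# Four cases are enough here to take every branch in both directions.
assert shipping_cost(25, express=False) == 15.0   # A true,  B false
assert shipping_cost(10, express=False) == 5.0    # A false, B false
assert shipping_cost(25, express=True) == 30.0    # A true,  B true
assert shipping_cost(10, express=True) == 10.0    # A false, B true
print("both outcomes of each decision were exercised")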






49. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.






50. The behavior produced/observed when a component or system is tested.






