Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might sometimes find the answers obvious, but retaking the test reinforces your understanding each time.

1. A series which appears to be random but is in fact generated according to some prearranged sequence.
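
A minimal Python sketch: seeding a generator makes the "random-looking" series fully prearranged and reproducible.

    import random

    # Two generators seeded identically emit the same series: the
    # values only look random; the seed prearranges the sequence.
    a = random.Random(42)
    b = random.Random(42)

    series_a = [a.randint(0, 99) for _ in range(5)]
    series_b = [b.randint(0, 99) for _ in range(5)]

    assert series_a == series_b  # identical every run
    print(series_a)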






2. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.






3. A form of static analysis based on the definition and usage of variables.






4. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
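
A worked Python sketch of the implication, using a hypothetical classify function:

    def classify(n: int) -> int:
        if n < 0:   # one decision, two branches (True and False)
            n = -n
        return n

    # classify(-5) alone executes every statement (100% statement
    # coverage) but only the True branch, so branch coverage is 50%.
    assert classify(-5) == 5

    # Adding classify(3) exercises the False (fall-through) branch,
    # bringing branch coverage, and with it decision coverage, to 100%.
    assert classify(3) == 3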






5. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
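
A Python sketch, assuming a hypothetical shipping_cost function whose input domain falls into three partitions:

    def shipping_cost(weight_kg: float) -> float:
        # Hypothetical pricing rules defining three equivalence partitions.
        if weight_kg <= 1:
            return 2.50
        if weight_kg <= 10:
            return 6.00
        return 15.00

    # One representative test case per partition covers each at least once.
    assert shipping_cost(0.5) == 2.50    # partition: 0 < w <= 1
    assert shipping_cost(5.0) == 6.00    # partition: 1 < w <= 10
    assert shipping_cost(25.0) == 15.00  # partition: w > 10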






6. A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.






7. Comparison of actual and expected results, performed after the software has finished running.






8. The association of the definition of a variable with the use of that variable. Variable uses include computational (e.g. multiplication) or to direct the execution of a path ("predicate" use).
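
In code terms, a short hypothetical Python snippet annotating one definition and its two kinds of use:

    x = 7         # definition of x
    y = x * 2     # computational use ("c-use"): pairs with the definition above
    if x > 5:     # predicate use ("p-use"): the use directs which path executes
        y += 1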






9. The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and the test types to be performed.






10. The process of identifying differences between the actual results produced by the component or system under test and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison) or after test execution.
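
A Python sketch of both styles, assuming a hypothetical add function as the component under test:

    def add(a: int, b: int) -> int:  # hypothetical component under test
        return a + b

    # Dynamic comparison: actual vs expected, checked during execution.
    assert add(2, 3) == 5

    # Post-execution comparison: actual results are collected first and
    # compared against the expected results afterwards.
    actuals = [add(2, 3), add(-1, 1)]
    expected = [5, 0]
    mismatches = [(a, e) for a, e in zip(actuals, expected) if a != e]
    print(mismatches or "no differences")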






11. The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.






12. The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.






13. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
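
For a decision with two single conditions there are four outcome combinations to execute; a Python sketch:

    from itertools import product

    def decision(a: int, b: int) -> bool:
        # '&' rather than 'and' so both single conditions are always evaluated
        return (a > 0) & (b > 0)

    # Hypothetical test inputs chosen to hit each combination of
    # single-condition outcomes: (T,T), (T,F), (F,T), (F,F).
    inputs = {(True, True): (1, 1), (True, False): (1, -1),
              (False, True): (-1, 1), (False, False): (-1, -1)}
    for combo in product([True, False], repeat=2):
        a, b = inputs[combo]
        print(combo, "->", decision(a, b))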






14. Computer instructions and data definitions expressed in a programming language or in a form output by an assembler, compiler or other translator. [IEEE 610]






15. The process of demonstrating the ability to fulfill specified requirements. Note the term 'qualified' is used to designate the corresponding status. [ISO 9000]






16. A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned. See also test management: the planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






17. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
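
A Python sketch for a hypothetical two-state machine: with N = 1, coverage counts exercised sequences of two consecutive transitions (1-switch coverage).

    # Hypothetical state machine: (from_state, event, to_state) transitions.
    transitions = [
        ("off", "press", "on"),
        ("on", "press", "off"),
        ("on", "timeout", "off"),
    ]

    # Every valid sequence of N+1 = 2 consecutive transitions.
    pairs = [(t1, t2) for t1 in transitions for t2 in transitions
             if t1[2] == t2[0]]

    # Suppose the test suite has exercised just one of these pairs.
    exercised = {(transitions[0], transitions[1])}
    coverage = 100 * len(exercised & set(pairs)) / len(pairs)
    print(f"1-switch coverage: {coverage:.0f}%")  # 25% here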






18. Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]






19. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
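
A worked Python sketch of the usual estimate: if testing found a given fraction of the seeded defects, assume it found a similar fraction of the real ones.

    seeded_total = 20   # known defects intentionally added
    seeded_found = 15   # seeded defects detected so far
    real_found = 60     # genuine (non-seeded) defects detected so far

    # Testing caught 15/20 = 75% of the seeded defects, so assume it
    # also caught about 75% of the real ones.
    estimated_real_total = real_found * seeded_total / seeded_found
    estimated_remaining = estimated_real_total - real_found
    print(estimated_real_total, estimated_remaining)  # 80.0 and 20.0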






20. A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.






21. Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple interconnected domains, addressing large-scale inter-disciplinary common problems and purposes.






22. Testing to determine the scalability of the software product.






23. The process of testing to determine the resource utilization of a software product. See also efficiency testing: the process of testing to determine the efficiency of a software product.






24. A model that shows the growth in reliability over time during continuous testing of a component or system as a result of the removal of defects that result in reliability failures.






25. A document specifying the test conditions (coverage items) for a test item, the detailed test approach, and identifying the associated high level test cases. [After IEEE 829]






26. Testing, either functional or non-functional, without reference to the internal structure of the component or system. See also black-box test design technique: procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system, without reference to its internal structure.






27. A defect in a program's dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
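
In garbage-collected languages the same failure usually comes from references that are never released; a Python sketch of the pattern:

    _cache = []  # module-level list that only ever grows

    def handle_request(payload: bytes) -> int:
        # Defect: every payload is retained "for later" but never evicted,
        # so memory use grows without bound until the process fails.
        _cache.append(payload)
        return len(payload)

    for _ in range(3):
        handle_request(b"x" * 1024)
    print(len(_cache))  # grows with every call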






28. A path that cannot be exercised by any set of possible input values.
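
A Python sketch: the path through both guarded blocks below can never execute, because no input is both positive and negative.

    def sign_label(x: int) -> str:
        label = ""
        if x > 0:
            label += "positive"
        if x < 0:
            label += "negative"
        return label

    # Feasible paths: neither block (x == 0), first only (x > 0),
    # second only (x < 0). The path taking *both* blocks is infeasible:
    # no value of x satisfies x > 0 and x < 0.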






29. The process of testing to determine the maintainability of a software product.






30. A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process. [After TMap]






31. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review: a review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.






32. An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection and walkthrough. [After IEEE 1028]






33. The process of testing to determine the interoperability of a software product.






34. A software tool that translates programs expressed in a high order language into their machine language equivalents. [IEEE 610]






35. An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing: testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.






36. Coverage measures based on the internal structure of a component or system.






37. The process of testing an integrated system to verify that it meets specified requirements. [Hetzel]






38. The process of running a test on the component or system under test, producing actual result(s).






39. The behavior produced/observed when a component or system is tested.






40. The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.
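
A worked Python sketch showing why 100% condition coverage does not by itself guarantee decision coverage:

    def decision(a: int, b: int) -> bool:
        # '&' rather than 'and' so both single conditions are always evaluated
        return (a > 0) & (b > 0)

    # The two tests below drive each single condition both True and False
    # (100% condition coverage), yet the decision is False both times,
    # so decision coverage is only 50%.
    print(decision(1, -1), decision(-1, 1))  # False False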






41. The percentage of definition-use pairs that have been exercised by a test suite.






42. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. [CMMI]






43. The percentage of equivalence partitions that have been exercised by a test suite.






44. A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. [After IEEE 829]






45. An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.






46. A white box test design technique in which test cases are designed to execute LCSAJs.






47. The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]






48. A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.






49. A minimal software item that can be tested in isolation.






50. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]
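
A Python sketch using an existing trusted implementation as the oracle for a hypothetical new sort routine:

    def my_sort(items):  # hypothetical software under test
        result = list(items)
        for i in range(len(result)):
            for j in range(i + 1, len(result)):
                if result[j] < result[i]:
                    result[i], result[j] = result[j], result[i]
        return result

    data = [3, 1, 2]
    # Oracle: an existing trusted system (here Python's built-in sorted)
    # supplies the expected result; the oracle is not the code under test.
    assert my_sort(data) == sorted(data)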





