Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. You may sometimes find an answer obvious, but retaking the test still reinforces your understanding each time.

1. [Beizer] A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
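Study note: a minimal Python sketch of equivalence partitioning, assuming a hypothetical validate_age function that accepts ages 18-65; the function and the chosen partitions are illustrative only, not part of the glossary entry.

  # Hypothetical component under test: accepts ages 18-65 inclusive.
  def validate_age(age: int) -> bool:
      return 18 <= age <= 65

  # One representative value per equivalence partition, so each
  # partition is covered at least once.
  partition_representatives = {
      "below range (invalid)": (10, False),
      "within range (valid)": (30, True),
      "above range (invalid)": (70, False),
  }

  for partition, (value, expected) in partition_representatives.items():
      assert validate_age(value) == expected, partition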






2. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
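Study note: a small Python sketch of branch coverage, using a hypothetical discount function; with a single if/else decision, exercising both outcomes gives 100% branch coverage and, in this tiny example, 100% statement coverage as well.

  def discount(total: float) -> float:
      if total > 100:          # decision with two branches
          return total * 0.9   # branch taken when total > 100
      return total             # branch taken when total <= 100

  # One input per branch; both branches exercised -> 100% branch coverage.
  for total, expected in [(150.0, 135.0), (50.0, 50.0)]:
      assert discount(total) == expected

  branch_coverage = 2 / 2 * 100  # exercised branches / total branches
  print(f"branch coverage: {branch_coverage:.0f}%")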






3. The physical or functional manifestation of a failure. For example, a system in failure mode may be characterized by slow operation, incorrect outputs, or complete termination of execution. [IEEE 610]






4. An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification. [IEEE 610]






5. A Linear Code Sequence And Jump, consisting of the following three items (conventionally identified by line numbers in a source code listing): the start of the linear sequence of executable statements, the end of the linear sequence, and the target line to which control flow is transferred at the end of the linear sequence.
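Study note: a hypothetical numbered listing (shown as Python comments) with one LCSAJ identified by its conventional triple; the example function and its line numbers are illustrative only.

  #  1  def first_positive(values):
  #  2      i = 0
  #  3      while i < len(values):
  #  4          if values[i] > 0:
  #  5              return values[i]   # jump out of the linear sequence
  #  6          i += 1
  #  7      return None
  #
  # One LCSAJ in this listing could be recorded as the triple below: the
  # linear sequence of executable statements starting at line 1, ending
  # at line 5, and the jump target reached from line 5 (the exit).
  lcsaj = (1, 5, "exit")
  print(lcsaj)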






6. A questionnaire based usability test technique to evaluate the usability, e.g. user-satisfaction, of a component or system. [Veenendaal]






7. The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.






8. A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.
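Study note: a minimal Python sketch of data-driven testing; the in-memory test_table stands in for a spreadsheet, and the add function under test is hypothetical.

  def add(a: int, b: int) -> int:   # hypothetical function under test
      return a + b

  # input_a, input_b, expected_result: one row per test case
  test_table = [
      (1, 2, 3),
      (0, 0, 0),
      (-5, 5, 0),
      (10, 7, 17),
  ]

  # The single control script: it executes every test in the table.
  for a, b, expected in test_table:
      actual = add(a, b)
      assert actual == expected, f"add({a}, {b}) = {actual}, expected {expected}"
  print(f"{len(test_table)} data-driven tests passed")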






9. The percentage of LCSAJs of a component that have been exercised by a test suite. 100% LCSAJ coverage implies 100% decision coverage.
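Study note: the coverage figure itself is just a ratio expressed as a percentage; the counts below are made up for illustration.

  total_lcsajs = 8       # hypothetical number of LCSAJs in the component
  exercised_lcsajs = 6   # LCSAJs exercised by the current test suite

  lcsaj_coverage = exercised_lcsajs / total_lcsajs * 100
  print(f"LCSAJ coverage: {lcsaj_coverage:.0f}%")   # 75%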






10. The process of testing to determine the performance of a software product. See also efficiency testing: the process of testing to determine the efficiency of a software product.






11. The testing activities that must be repeated when testing is re-started after a suspension. [After IEEE 829]






12. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
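Study note: a small Python sketch of such a chained suite, with a hypothetical Account class; the postcondition of the first test (an account exists) is the precondition of the second.

  class Account:                    # hypothetical object under test
      def __init__(self):
          self.balance = 0
      def deposit(self, amount):
          self.balance += amount

  def test_create_account(state):
      state["account"] = Account()  # postcondition: an account exists
      assert state["account"].balance == 0

  def test_deposit(state):
      account = state["account"]    # precondition: the account from test 1
      account.deposit(100)
      assert account.balance == 100

  suite_state = {}
  for test_case in (test_create_account, test_deposit):   # order matters
      test_case(suite_state)
  print("test suite passed")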






13. The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.






14. A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.






15. An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.






16. Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. [ISO 9000]






17. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






18. Recording the details of any incident that occurred, e.g. during testing.






19. Operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by operator and/or administrator, focusing on operational aspects, e.g. recoverability, resource-behavior, installability and technical compliance.






20. A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.






21. A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST, an acronym for Computer Aided Software Testing.






22. Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.






23. A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]






24. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






25. A white box test design technique in which test cases are designed to execute LCSAJs.






26. The process of identifying risks using techniques such as brainstorming, checklists and failure history.






27. A defect in a program's dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it - eventually causing the program to fail due to lack of memory.
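Study note: the classic form is a missing free/delete in languages with manual allocation; the Python sketch below shows the same effect via a hypothetical cache that grows without bound in a long-running process.

  _results_cache = {}   # nothing ever evicts entries, so it grows forever

  def expensive_lookup(key: str) -> str:
      if key not in _results_cache:
          _results_cache[key] = key.upper() * 1000   # stand-in for real work
      return _results_cache[key]

  # Every distinct key retains memory that is never reclaimed; over time
  # a long-running process can fail due to lack of memory.
  for i in range(10_000):
      expensive_lookup(f"request-{i}")
  print(f"cached entries never reclaimed: {len(_results_cache)}")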






28. The process of testing to determine the maintainability of a software product.






29. The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]






30. Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement.






31. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
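Study note: the usual estimate behind defect seeding, with made-up numbers; the detection rate observed on the seeded defects is assumed to hold for the real (indigenous) defects as well.

  seeded_total = 20    # defects intentionally added
  seeded_found = 15    # seeded defects detected by the test suite
  real_found = 30      # non-seeded defects detected by the test suite

  detection_rate = seeded_found / seeded_total             # 0.75
  estimated_real_total = real_found / detection_rate       # 40
  estimated_remaining = estimated_real_total - real_found  # 10

  print(f"estimated real defects remaining: {estimated_remaining:.0f}")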






32. The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]






33. The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]






34. A program element is said to be exercised by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.






35. An uninterrupted period of time spent in executing tests. In exploratory testing, each test session is focused on a charter, but testers can also explore new opportunities or issues during a session. The tester creates and executes test cases on the fly and records their progress.






36. Testing to determine the scalability of the software product.






37. A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM.






38. A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means that ongoing real-time code reviews are performed.






39. The set from which valid input and/or output values can be selected.






40. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]
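Study note: for N=1 this means sequences of two consecutive transitions; the two-state door model in the Python sketch below is hypothetical.

  from itertools import product

  # Valid single transitions of a hypothetical door: (from, event, to).
  transitions = [
      ("closed", "open", "opened"),
      ("opened", "close", "closed"),
  ]

  # All valid sequences of N+1 = 2 consecutive transitions (1-switch pairs).
  pairs = [
      (t1, t2)
      for t1, t2 in product(transitions, repeat=2)
      if t1[2] == t2[0]   # second transition starts where the first ends
  ]

  exercised = {(transitions[0], transitions[1])}   # pairs hit by the suite
  coverage = len(exercised) / len(pairs) * 100
  print(f"1-switch coverage: {coverage:.0f}% of {len(pairs)} pairs")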






41. Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies. [IEEE 610]
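Study note: a Python sketch of back-to-back testing with two hypothetical, independently written variants of a mean function fed the same inputs.

  def mean_v1(values):   # variant 1: sum divided by count
      return sum(values) / len(values)

  def mean_v2(values):   # variant 2: incremental running average
      avg = 0.0
      for i, v in enumerate(values, start=1):
          avg += (v - avg) / i
      return avg

  for data in [[1, 2, 3, 4], [10.0, 10.0], [2.5, 7.5, 5.0]]:
      a, b = mean_v1(data), mean_v2(data)
      if abs(a - b) > 1e-9:   # discrepancy -> analyze further
          print(f"discrepancy for {data}: {a} vs {b}")
      else:
          print(f"{data}: variants agree ({a})")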






42. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.






43. Testing of software or specification by manual simulation of its execution. See also static analysis: analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.






44. The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.






45. The first executable statement within a component.






46. The process of combining components or systems into larger assemblies.






47. The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.






48. A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment.






49. Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. 'A>B AND C>1000'.






50. During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report.