Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you may at times find an answer obvious, but taking the test repeatedly reinforces your understanding.
1. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.






2. A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. [Gilb and Graham, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.






3. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






4. Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.






5. A set of exit criteria.






6. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
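The partitioning idea in statement 6 can be sketched in a few lines of Python. The age check and its 18..65 valid range below are hypothetical, chosen only to show one representative value per partition:

```python
# Equivalence partitioning sketch (hypothetical example): an age field
# accepts 18..65. That yields three partitions: below range, in range,
# above range. One representative value covers each partition once.

def accepts_age(age):
    """Hypothetical component under test: valid ages are 18..65."""
    return 18 <= age <= 65

# One representative test case per equivalence partition: (input, expected).
partitions = {
    "below range": (10, False),   # representative of ages < 18
    "in range":    (30, True),    # representative of ages 18..65
    "above range": (70, False),   # representative of ages > 65
}

results = {name: accepts_age(value) == expected
           for name, (value, expected) in partitions.items()}
```

Three test cases instead of one per possible age: each partition is assumed to behave uniformly, so one representative stands in for all its members.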






7. Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.






8. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.






9. Choosing a set of input values to force the execution of a given path.
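Statement 9 describes input selection for path testing. A minimal sketch, using a made-up two-path function, looks like this:

```python
# Path-forcing sketch (hypothetical component): classify() has exactly two
# paths, one through the "if" body and one past it. For each target path we
# choose an input value known to force execution down that path.

def classify(x):
    if x < 0:                 # path A: taken when x < 0
        return "negative"
    return "non-negative"     # path B: taken when x >= 0

# Inputs chosen specifically to drive each path:
path_inputs = {"negative": -5, "non-negative": 5}
observed = {path: classify(value) for path, value in path_inputs.items()}
```

With more deeply nested branches, picking such inputs means solving the chain of conditions along the desired path.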






10. The percentage of boundary values that have been exercised by a test suite.






11. A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means that ongoing real-time code reviews are performed.






12. The process of running a test on the component or system under test - producing actual result(s).






13. The degree of uniformity, standardization, and freedom from contradiction among the documents or parts of a component or system. [IEEE 610]






14. A review not based on a formal (documented) procedure.






15. The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.






16. Acronym for Commercial Off-The-Shelf software. See off-the-shelf software. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.






17. Test execution carried out by following a previously documented sequence of tests.






18. A document that consists of a test design specification, test case specification, and/or test procedure specification.






19. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability. The capability of the software to be understood, learned, used, and attractive to the user when used under specified conditions. [ISO 9126]






20. A model structure wherein attaining the goals of a set of process areas establishes a maturity level; each level builds a foundation for subsequent levels. [CMMI]






21. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]






22. The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.






23. A test tool to perform automated test comparison of actual results with expected results.
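The comparator described in statement 23 can be reduced to its essence in a few lines. The whitespace normalisation below is a hypothetical example of the kind of masking such tools commonly offer:

```python
# Test comparator sketch: automated comparison of an actual result against
# an expected result. Real comparator tools often normalise or mask parts
# of the output (timestamps, whitespace); here we normalise whitespace only.

def compare(actual, expected):
    normalise = lambda s: " ".join(s.split())
    return normalise(actual) == normalise(expected)

# Differing whitespace and a trailing newline are ignored by the comparator:
verdict = compare("total:  42\n", "total: 42")
```

Without the normalisation step, a byte-for-byte comparison would flag the two strings as different even though the meaningful content matches.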






24. An instance of an output. See also output. A variable (whether stored within a component or outside) that is written by a component.






25. The process of testing to determine the efficiency of a software product.






26. A sequence of one or more consecutive executable statements containing no branches. Note: A node in a control flow graph represents a basic block.
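Statement 26 is easiest to see against real code. In this made-up function, the branch splits the statements into three basic blocks, each a straight run with no branches inside it:

```python
# Basic block sketch: consecutive executable statements with no branches
# form one block; each block is a node in the control flow graph.

def pay(amount, discount):
    gross = amount                    # ── block B1: entry, runs straight
    rate = discount / 100             #    through until the branch below
    if rate > 0:
        gross = gross * (1 - rate)    # ── block B2: only when rate > 0
    return round(gross, 2)            # ── block B3: join point after branch

result = pay(200, 10)   # executes B1, B2, B3 in sequence
```

A call like `pay(200, 0)` would execute only B1 and B3, skipping B2, which is exactly the distinction branch-level coverage measures.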






27. The percentage of executable statements that have been exercised by a test suite.
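Statement coverage (statement 27) can be instrumented by hand in a toy example. The `hits` set and statement tags below are an illustrative device, not how real coverage tools work internally:

```python
# Statement coverage sketch: tag each executable statement of a tiny
# (hypothetical) component and record which tags a test run reaches.

hits = set()

def absolute(x):
    hits.add("s1")        # statement 1: the comparison below, always reached
    if x < 0:
        hits.add("s2")    # statement 2: reached only for negative input
        return -x
    hits.add("s3")        # statement 3: reached only for non-negative input
    return x

absolute(4)               # this single test exercises s1 and s3, but not s2

total_statements = {"s1", "s2", "s3"}
coverage = 100 * len(hits) / len(total_statements)
```

One test with a non-negative input yields about 67% statement coverage; adding a second test such as `absolute(-4)` would reach `s2` and raise it to 100%.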






28. The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity. [IEEE 610]






29. A high-level metric of effectiveness and/or efficiency used to guide and control progressive development, e.g. lead-time slip for software development. [CMMI]






30. Non-fulfillment of a specified requirement. [ISO 9000]






31. A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
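A minimal data-driven setup per statement 31 might look as follows. The CSV text and the `square` component are hypothetical stand-ins for a spreadsheet and a real system under test:

```python
import csv
import io

# Data-driven testing sketch: inputs and expected results live in a table;
# one control script iterates over the rows. The in-memory CSV stands in
# for a spreadsheet maintained outside the test code (hypothetical data).
table = io.StringIO("""input,expected
2,4
3,9
10,100
""")

def square(n):
    """Hypothetical component under test."""
    return n * n

failures = []
for row in csv.DictReader(table):
    actual = square(int(row["input"]))
    if actual != int(row["expected"]):
        failures.append(row)
```

Adding a new test case means adding a row to the table; the control script itself never changes, which is the point of the technique.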






32. The behavior produced/observed when a component or system is tested.






33. A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).






34. Acronym for Computer Aided Software Engineering.






35. A test design technique in which a model of the statistical distribution of the input is used to construct representative test cases. See also operational profile testing. Statistical testing using a model of system operations (short duration tasks) and their probability of typical use.






36. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.






37. The degree of impact that a defect has on the development or operation of a component or system. [After IEEE 610]






38. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






39. A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system. See also performance testing, stress testing.






40. Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.






41. A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.






42. The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]






43. The set from which valid output values can be selected. See also domain. The set from which valid input and/or output values can be selected.






44. A static usability test technique to determine the compliance of a user interface with recognized usability principles (the so-called "heuristics").






45. Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users' business procedures or operational procedures.






46. Testing of software or specification by manual simulation of its execution. See also static analysis. Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.






47. The process of finding, analyzing, and removing the causes of failures in software.






48. The assessment of change to the layers of development documentation, test documentation, and components, in order to implement a given change to specified requirements.






49. A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system. [After TMap]
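The role described in statement 49 — a stand-in that takes care of calling the component under test — can be sketched like this. The `component` function and its test cases are hypothetical:

```python
# Test driver sketch: a small harness that replaces the real caller (which
# may not exist yet), supplying inputs to the component under test and
# collecting the results.

def component(x, y):
    """Hypothetical component under test."""
    return x + y

def driver():
    # The driver owns the control flow: it decides what to call and when,
    # feeds in test inputs, and checks the outputs.
    cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
    return all(component(*args) == expected for args, expected in cases)

ok = driver()
```

A driver calls downward into the component under test; its mirror image, a stub, replaces a component that the unit under test calls.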






50. Testing that involves the execution of the software of a component or system.