Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might find some answers obvious at times, but you will see that taking the test repeatedly reinforces your understanding.
1. The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]






2. Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.






3. A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied. [After IEEE 610]






4. Testing where the system is subjected to large volumes of data. See also resource-utilization testing. The process of testing to determine the resource-utilization of a software product.






5. A form of static analysis based on a representation of sequences of events (paths) in the execution through a component or system.






6. The capability of the software product to interact with one or more specified components or systems. [After ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions. [ISO 9126]






7. Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.






8. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
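As a study aid, a small sketch may make this concrete (the function and test inputs are made up, not part of the quiz): a function with one if/else has two branches, and a suite that never drives the False branch reaches only 50% branch coverage.

    # Hypothetical component: one if/else means two branches.
    def classify(n):
        if n >= 0:
            return "non-negative"   # branch taken when the condition is True
        return "negative"           # branch taken when the condition is False

    tests = [3, 7]                       # both inputs drive the True branch only
    exercised = {n >= 0 for n in tests}  # branch outcomes the suite exercises
    print(f"branch coverage: {len(exercised) / 2:.0%}")  # 50%; add a negative input for 100%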






9. The process of testing to determine the interoperability of a software product.






10. The percentage of combinations of all single condition outcomes within one statement that have been exercised by a test suite. 100% multiple condition coverage implies 100% condition determination coverage.
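For example, a decision built from two atomic conditions has 2^2 = 4 outcome combinations, and all of them must be exercised for 100% multiple condition coverage. A quick sketch with hypothetical condition names:

    from itertools import product

    # All outcome combinations for a decision such as "if door_closed and key_turned:".
    combinations = list(product([True, False], repeat=2))
    print(combinations)        # [(True, True), (True, False), (False, True), (False, False)]
    print(len(combinations))   # 4 test situations needed for 100% multiple condition coverage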






11. An executable statement where a variable is assigned a value.






12. Testing that involves the execution of the software of a component or system.






13. Testing of software used to convert data from existing systems for use in replacement systems.






14. The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.






15. A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test etc. A test type may take place on one or more test levels or test phases. [After TMap]






16. Acronym for Computer Aided Software Engineering.






17. An expert based test estimation technique that aims at making an accurate estimation using the collective wisdom of the team members.






18. The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.






19. A test result in which a defect is reported although no such defect actually exists in the test object.






20. A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.






21. A development activity where a complete system is compiled and linked every day (usually overnight), so that a consistent system is available at any time including all latest changes.






22. A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.






23. A specification or software product that has been formally reviewed or agreed upon, that thereafter serves as the basis for further development, and that can be changed only through a formal change control process. [After IEEE 610]






24. An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.






25. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.






26. The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity. The number of independent paths through a program. Cyclomatic complexity is defined as: L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine).






27. A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).






28. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






29. Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that, during access to the database, data is not corrupted or unexpectedly deleted, updated or created.






30. Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone's perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE 1044]






31. The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]






32. A procedure to derive and/or select test cases targeted at one or more defect categories, with tests being developed from what is known about the specific defect category. See also defect taxonomy. A system of (hierarchical) categories designed to be a useful aid for reproducibly classifying defects.






33. The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.






34. A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities. [Graham]
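As a study aid only, here is a minimal sketch of the idea using Python's unittest and a stub; the convert component and its rate service are hypothetical, not part of the quiz:

    import unittest
    from unittest import mock

    # Hypothetical component under test; its external rate service is replaced
    # by a stub so the component can be tested in isolation.
    def convert(amount, rate_service):
        return amount * rate_service.current_rate()

    class ConvertTest(unittest.TestCase):
        def test_convert_uses_current_rate(self):
            stub = mock.Mock()
            stub.current_rate.return_value = 2.0
            self.assertEqual(convert(10, stub), 20.0)

    if __name__ == "__main__":
        unittest.main()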






35. The capability of the software product to be installed in a specified environment [ISO 9126]. See also portability. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






36. The number of independent paths through a program. Cyclomatic complexity is defined as: L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine).
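A worked example with made-up figures may help: a control flow graph with 9 edges, 7 nodes and 1 connected part gives a cyclomatic complexity of 9 - 7 + 2 = 4.

    # Hypothetical control flow graph figures, purely for illustration.
    L, N, P = 9, 7, 1         # edges, nodes, disconnected parts
    complexity = L - N + 2 * P
    print(complexity)         # 4 independent paths through the program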






37. The association of the definition of a variable with the use of that variable. Variable uses include computational (e.g. multiplication) or to direct the execution of a path ("predicate" use).
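To make the definition-use idea concrete, a short hypothetical sketch (the function is made up): factor is defined once and then used twice, once as a predicate and once in a computation.

    def scaled_total(prices, discount):
        factor = 1 - discount         # definition: `factor` is assigned a value
        if factor <= 0:               # predicate use: the value directs which path is taken
            return 0
        return sum(prices) * factor   # computational use: the value feeds a calculation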






38. The testing of individual software components. [After IEEE 610]






39. The evaluation of a condition to True or False.






40. The individual element to be tested. There usually is one test object and many test items. See also test object. A reason or purpose for designing and executing a test.






41. The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
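For example, with made-up figures, 18 defects found in a 12 KLOC component give a defect density of 1.5 defects per KLOC.

    # Hypothetical figures, for illustration only.
    defects, size_kloc = 18, 12
    print(f"{defects / size_kloc:.1f} defects per KLOC")   # 1.5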






42. The representation of selected behavioral characteristics of one physical or abstract system by another system. [ISO 2382/1]






43. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. [After IEEE 610]






44. A test tool to perform automated test comparison of actual results with expected results.
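The core idea can be sketched in a few lines; the file names below are hypothetical, and a real comparator would also report the differences it finds.

    from pathlib import Path

    def outputs_match(actual_file: str, expected_file: str) -> bool:
        # Compare an actual test output file against a stored expected result.
        return Path(actual_file).read_text() == Path(expected_file).read_text()

    # Example (hypothetical files): print(outputs_match("run_output.txt", "expected_output.txt"))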






45. A test is deemed to pass if its actual result matches its expected result.






46. Acronym for Commercial Off-The-Shelf software. See off-the-shelf software. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.






47. A tool that provides objective measures of what structural elements, e.g. statements, branches, have been exercised by a test suite.






48. The fundamental test process comprises test planning and control, test analysis and design, test implementation and execution, evaluating exit criteria and reporting, and test closure activities.






49. A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item. [After IEEE 829]






50. The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions. [IEEE 610] See also error tolerance, fault tolerance. The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610]