Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, study the terms first.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the correct answer obvious; even so, retaking the test reinforces your understanding.

1. An input for which the specification predicts a result.

2. Acronym for Computer Aided Software Engineering.

3. The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability. The tracing of requirements for a test level through the layers of test documentation.

4. A black box test design technique in which test cases are designed to execute user scenarios.

5. The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO 9126]

6. A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]

7. An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction. [Beizer]

8. A tree showing equivalence partitions hierarchically ordered, which is used to design test cases in the classification tree method. See also classification tree method. A black box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and/or output domains.

9. The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]

10. A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.

11. The process of testing to determine the functionality of a software product.

12. The process of testing to determine the portability of a software product.

13. The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution. [After IEEE 610]

14. A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST. Acronym for Computer Aided Software Testing.

15. A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.

16. A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.

17. A tool that carries out static analysis.

18. A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied.

19. A black box test design technique in which test cases are designed from cause-effect graphs. [BS 7925/2]

20. A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.

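As a study aid, the capabilities listed in item 20 can be tried out with Python's built-in debugger, pdb; the sketch below is illustrative only and not part of the quiz material.

    # Illustrative only: the debugger capabilities named in item 20
    # (step-by-step execution, halting at a statement, examining
    # variables), using Python's built-in pdb.
    def divide(a, b):
        breakpoint()   # halts the program here and opens the pdb prompt
        return a / b   # at the prompt: 'p a' examines a variable,
                       # 'n' steps to the next statement, 'c' continues

    if __name__ == "__main__":
        divide(10, 2)
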
21. The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface. [ISO 9126] See also reliability, robustness. The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.

22. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

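The four parts listed in item 22 map naturally onto a small data structure, which can make the definition easier to remember; in the Python sketch below, the class and field names are invented for this example and come from no standard.

    # Illustrative sketch of the four parts named in item 22;
    # 'Case' and its fields are invented names.
    from dataclasses import dataclass, field

    @dataclass
    class Case:
        input_values: dict
        execution_preconditions: list
        expected_result: object
        execution_postconditions: list = field(default_factory=list)

    # A case written against a hypothetical login requirement.
    tc = Case(
        input_values={"user": "alice", "password": "secret"},
        execution_preconditions=["account 'alice' exists"],
        expected_result="login succeeds",
        execution_postconditions=["a session token is issued"],
    )
    print(tc)
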
23. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]

24. The process of testing to determine the interoperability of a software product.

25. A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.

26. A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

27. Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.

28. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]

29. The calculated approximation of a result (e.g. effort spent, completion date, costs involved, number of test cases, etc.) which is usable even if input data may be incomplete, uncertain, or noisy.

30. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. [After IEEE 610] See also integration testing. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

31. The percentage of sequences of N+1 transitions that have been exercised by a test suite. [Chow]

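The metric in item 31 is easier to remember with a worked computation; in the sketch below the state machine and the test suite are invented for illustration, with N = 1 so that sequences of two consecutive transitions are counted.

    # Illustrative only: a worked computation for item 31 with N = 1,
    # i.e. the percentage of valid two-transition sequences that a
    # (made-up) test suite exercises.
    transitions = {                 # state -> states reachable in one step
        "idle":    ["running"],
        "running": ["paused", "idle"],
        "paused":  ["running"],
    }

    # Every valid sequence of N + 1 = 2 transitions, as a state triple.
    valid = {(a, b, c)
             for a, nexts in transitions.items()
             for b in nexts
             for c in transitions.get(b, [])}

    # State paths visited by two hypothetical tests.
    suite = [["idle", "running", "paused", "running"],
             ["idle", "running", "idle"]]
    hit = {tuple(p[i:i + 3]) for p in suite for i in range(len(p) - 2)}

    print(f"coverage: {len(hit & valid) / len(valid):.0%}")  # prints 50%
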
32. The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]

33. The testing of individual software components. [After IEEE 610]

34. The evaluation of a condition to True or False.

35. An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

36. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]

37. The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact. [After IEEE 1044]

38. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs. [IEEE 610]

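A quick worked example of the ratio in item 38, with all numbers invented:

    # Illustrative only: the ratio described in item 38, measured here
    # against hours of operation (any unit of measure works).
    failures = 4       # failures of a given category observed
    hours = 200        # the chosen unit of measure
    print(f"{failures / hours} failures per hour")   # 0.02 failures per hour
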
39. A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code. [After Adrion]

40. Any (work) product that must be delivered to someone other than the (work) product's author.

41. An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints). [After IEEE 1008]

42. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

43. A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.

44. The process of testing to determine the maintainability of a software product.

45. Method aiming to measure the size of the functionality of an information system. The measurement is independent of the technology. This measurement may be used as a basis for the measurement of productivity, the estimation of the needed resources, and project control.

46. A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules.

47. A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence. See also Failure Mode, Effect and Criticality Analysis (FMECA).

48. A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.

49. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]

50. A set of interrelated activities - which transform inputs into outputs. [ISO 12207]