Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find an answer obvious, but retaking the test reinforces your understanding each time.
1. Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. 'A>B AND C>1000'.
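As a minimal illustration (the variable names and values below are invented), a compound condition in Python looks like this:

```python
# Hypothetical values chosen for illustration.
a, b, c = 7, 3, 1500

# A compound condition: two single conditions joined by the logical
# operator AND, as in the glossary example 'A>B AND C>1000'.
result = a > b and c > 1000

print(result)  # True: both single conditions hold
```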






2. Definition of user profiles in performance, load and/or stress testing. Profiles should reflect anticipated or actual usage based on an operational profile of a component or system, and hence the expected workload. See also load profile, operational profile.






3. A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
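A minimal sketch of this test-first cycle (the function name and behavior are invented for illustration):

```python
# Step 1 (test first): this test is written before add() exists;
# running it at that point fails, which drives the implementation.
def test_add_two_numbers():
    assert add(2, 3) == 5

# Step 2: write just enough code to make the test pass.
def add(x, y):
    return x + y

# Step 3: run the test -- it now passes.
test_add_two_numbers()
```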






4. A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script of the test.
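A toy sketch of this technique (the keywords and the "application" are invented; real frameworks would read the table from an external data file):

```python
# The "data file": each row holds a keyword naming the action,
# the test data, and the expected result.
test_table = [
    ("uppercase", "hello", "HELLO"),
    ("reverse",   "abc",   "cba"),
]

# Supporting scripts: one small function per keyword.
keywords = {
    "uppercase": lambda s: s.upper(),
    "reverse":   lambda s: s[::-1],
}

# Control script: interprets each row by dispatching on the keyword
# and comparing the actual result against the expected one.
results = []
for keyword, data, expected in test_table:
    actual = keywords[keyword](data)
    results.append(actual == expected)

print(results)  # [True, True]
```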






5. The process of recognizing, investigating, taking action and disposing of incidents. It involves logging incidents, classifying them and identifying the impact. [After IEEE 1044]






6. Testware used in automated testing, such as tool scripts.






7. A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program. [CMMI]






8. The process of developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts.






9. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
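For illustration (the function and values are invented), a component with one decision needs at least two tests for 100% decision coverage:

```python
# One decision with two outcomes. 100% decision coverage requires at
# least one test where the condition is True and one where it is False.
def classify(n):
    if n >= 0:           # decision: True outcome
        return "non-negative"
    return "negative"    # decision: False outcome

# Two test cases together exercise both decision outcomes, giving
# 100% decision (and hence branch and statement) coverage.
outcomes = [classify(5), classify(-5)]
print(outcomes)  # ['non-negative', 'negative']
```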






10. Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.






11. An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).






12. The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.






13. A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]






14. The process of testing to determine the recoverability of a software product. See also reliability testing. The process of testing to determine the reliability of a software product.






15. Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.






16. A special instance of a smoke test to decide if the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase. See also smoke test. A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details.






17. The set from which valid output values can be selected. See also domain. The set from which valid input and/or output values can be selected.






18. Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. [ISO 9000]






19. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.






20. A set of exit criteria.






21. Testing to determine the robustness of the software product.






22. Testing, either functional or non-functional, without reference to the internal structure of the component or system. Black-box test design technique: procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.






23. The process of demonstrating the ability to fulfill specified requirements. Note the term 'qualified' is used to designate the corresponding status. [ISO 9000]






24. A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.






25. The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently test cases) and execution of tests to determine whether the requirements have been met. [After IEEE 610]






26. A minimal software item that can be tested in isolation.






27. The process of testing to determine the resource-utilization of a software product. See also efficiency testing. The process of testing to determine the efficiency of a software product.






28. The process of testing to determine the portability of a software product.






29. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
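A small sketch of this technique (the function and values are invented): for a statement with two single conditions, test cases cover all four outcome combinations.

```python
# One statement containing two single conditions. Condition combination
# testing exercises every True/False combination of their outcomes.
def grant_discount(is_member, total):
    return is_member and total > 100

# All four combinations of the two single conditions' outcomes.
cases = [
    (True,  150),  # True,  True
    (True,   50),  # True,  False
    (False, 150),  # False, True
    (False,  50),  # False, False
]
results = [grant_discount(m, t) for m, t in cases]
print(results)  # [True, False, False, False]
```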






30. A sequence of executable statements within a component.






31. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.






32. Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data. [ISO 9126] See also functionality. The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.






33. A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence. See also Failure Mode, Effect and Criticality Analysis (FMECA).






34. The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.






35. A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain.






36. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
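Written out as data, a test case might look like this (the names, the account scenario, and the function under test are all invented for illustration):

```python
# A test case as a record: input values, execution precondition,
# expected result and execution postcondition.
test_case = {
    "objective": "withdrawal reduces the balance",
    "precondition": {"balance": 100},
    "inputs": {"amount": 30},
    "expected_result": 70,
    "postcondition": "balance is non-negative",
}

# Hypothetical unit under test.
def withdraw(balance, amount):
    return balance - amount

actual = withdraw(test_case["precondition"]["balance"],
                  test_case["inputs"]["amount"])
passed = actual == test_case["expected_result"]
print(passed)  # True
```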






37. Acronym for Commercial Off-The-Shelf software. See off-the-shelf software. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.






38. A white box test design technique in which test cases are designed to execute branches.






39. Testing the changes to an operational system or the impact of a changed environment to an operational system.






40. A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.






41. A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.






42. A chronological record of relevant details about the execution of tests. [IEEE 829]






43. The percentage of equivalence partitions that have been exercised by a test suite.
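A toy computation of this metric (the partitions and the test suite are invented): the input domain splits into three partitions, and a suite hitting only two of them gets 66.7% coverage.

```python
# Three equivalence partitions of a hypothetical integer input domain.
partitions = {
    "negative": range(-1000, 0),
    "zero": [0],
    "positive": range(1, 1001),
}

# A test suite whose values fall in only two of the three partitions.
test_suite = [-7, 42]

# A partition counts as exercised if any test value falls inside it.
exercised = {name for name, values in partitions.items()
             if any(v in values for v in test_suite)}
coverage = 100 * len(exercised) / len(partitions)
print(f"{coverage:.1f}%")  # 66.7%
```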






44. The capability of the software product to achieve acceptable levels of risk of harm to people, business, software, property or the environment in a specified context of use. [ISO 9126]






45. A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases. [After IEEE 829]






46. An attribute of a test indicating whether the same results are produced each time the test is executed.






47. Commonly used to refer to a test procedure specification, especially an automated one.






48. A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).






49. A pointer that references a location that is out of scope for that pointer or that does not exist. See also pointer. A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed.






50. A form of static analysis based on the definition and usage of variables.
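A toy static data flow check (purely illustrative, using Python's standard `ast` module): the analyzed fragment is parsed, not executed, and variables that are defined but never used are reported.

```python
import ast

# Code fragment to analyze statically; note that 'y' is defined
# but never used afterwards.
source = """
x = 1
y = 2
print(x)
"""

tree = ast.parse(source)
defined, used = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            defined.add(node.id)    # definition of a variable
        elif isinstance(node.ctx, ast.Load):
            used.add(node.id)       # usage of a name

unused = defined - used
print(unused)  # {'y'} -- defined but never used
```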