Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, study the material first.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are chosen at random from the answers to other questions, so some answers may seem obvious at times, but repeated testing reinforces your understanding.

1. A development life cycle where a project is broken into a usually large number of iterations. An iteration is a complete development loop resulting in a release (internal or external) of an executable product, a subset of the final product under development, which grows from iteration to iteration to become the final product.

2. A software development approach whereby lines of code (production and/or test) of a component are written by two programmers sitting at a single computer. This implicitly means that ongoing real-time code reviews are performed.

3. A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.
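Such a table maps naturally onto code. The sketch below (a hypothetical two-state door, not part of the glossary) encodes the valid transitions in a Python dict; any (state, event) pair absent from the table is an invalid transition.

```python
# Illustrative state table for a two-state door: each (state, event) pair
# maps to the resulting state; missing pairs are invalid transitions.
STATE_TABLE = {
    ("closed", "open_door"): "open",
    ("open", "close_door"): "closed",
}

def next_state(state, event):
    """Return the resulting state, or None for an invalid transition."""
    return STATE_TABLE.get((state, event))

print(next_state("closed", "open_door"))  # valid transition -> "open"
print(next_state("open", "open_door"))    # invalid transition -> None
```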

4. A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test, etc. A test type may take place on one or more test levels or test phases. [After TMap]

5. The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

6. A series which appears to be random but is in fact generated according to some prearranged sequence.
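For instance, seeding Python's `random` module produces exactly such a series: the output looks random, but the same seed always regenerates the same prearranged sequence (a minimal sketch).

```python
import random

# Seeding makes the "random" series reproducible: identical seeds yield
# an identical prearranged sequence.
random.seed(42)
first = [random.randint(0, 9) for _ in range(5)]

random.seed(42)
second = [random.randint(0, 9) for _ in range(5)]

print(first == second)  # True: pseudo-random, not truly random
```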

7. A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an input document for the test process. [After TMap]

8. A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

9. Testing using input values that should be rejected by the component or system. See also error tolerance. The ability of a system or component to continue normal operation despite the presence of erroneous inputs. [After IEEE 610].
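A minimal sketch of such a negative test in Python, assuming a hypothetical `set_age` component that validates its input:

```python
def set_age(age):
    # Component under test: rejects erroneous input instead of failing.
    if not isinstance(age, int) or not 0 <= age <= 150:
        raise ValueError("age must be an integer between 0 and 150")
    return age

# Negative test: an input value that should be rejected.
try:
    set_age(-5)
    rejected = False
except ValueError:
    rejected = True

print(rejected)  # True: the erroneous input was rejected as expected
```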

10. A test result in which a defect is reported although no such defect actually exists in the test object.

11. Testing that involves the execution of the software of a component or system.

12. The data received from an external source by the test object during test execution. The external source can be hardware, software or human.

13. A model that shows the growth in reliability over time during continuous testing of a component or system as a result of the removal of defects that result in reliability failures.

14. Testing to determine the scalability of the software product.

15. The individual element to be tested. There usually is one test object and many test items. See also test object. A reason or purpose for designing and executing a test.

16. A test is deemed to pass if its actual result matches its expected result.

17. A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another. [IEEE 610]

18. A logical expression that can be evaluated as True or False, e.g. A>B. See also test condition. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute or structural element.

19. A tool that carries out static analysis.

20. Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality control, quality assurance and quality improvement. [After ISO 9000]

21. The activity of establishing or updating a test plan.

22. A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function. [After IEEE 829]

23. An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested.

24. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements.

25. Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

26. An executable statement where a variable is assigned a value.

27. A specification of the activity which a component or system being tested may experience in production. A load profile consists of a designated number of virtual users who process a defined set of transactions in a specified time period and according to a predefined operational profile.

28. Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. 'A>B AND C>1000'.

29. A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement).
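Using the compound condition from the previous definition, 'A>B AND C>1000', the four combinations of single condition outcomes can be exercised like this (an illustrative sketch, not from the glossary):

```python
def eligible(a, b, c):
    # One statement containing two single conditions joined by AND.
    return a > b and c > 1000

# Multiple condition testing exercises every combination of the single
# condition outcomes: (T,T), (T,F), (F,T), (F,F).
cases = [
    (5, 1, 2000, True),   # A>B True,  C>1000 True
    (5, 1, 500,  False),  # A>B True,  C>1000 False
    (1, 5, 2000, False),  # A>B False, C>1000 True
    (1, 5, 500,  False),  # A>B False, C>1000 False
]
results = [eligible(a, b, c) == expected for a, b, c, expected in cases]
print(all(results))  # True: all four combinations behave as expected
```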

30. A point in time in a project at which defined (intermediate) deliverables and results should be ready.

31. The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria. [Gilb and Graham]

32. A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc. See also risk. A factor that could result in future negative consequences; usually expressed as impact and likelihood.

33. The set from which valid output values can be selected. See also domain. The set from which valid input and/or output values can be selected.

34. A capability maturity model structure wherein capability levels provide a recommended order for approaching process improvement within specified process areas. [CMMI]

35. An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]

36. A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test. [After TMap]

37. Testing the changes to an operational system, or the impact of a changed environment on an operational system.

38. The response of a component or system to a set of input values and preconditions.

39. The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions. [ISO 9126]

40. Software developed specifically for a set of users or customers. The opposite is off-the-shelf software.

41. A five-level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.

42. A meeting at the end of a project during which the project team members evaluate the project and learn lessons that can be applied to the next project.

43. Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

44. Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

45. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).

46. A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component. [After IEEE 610]
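A minimal sketch in Python (hypothetical `total_price` and `price_lookup_stub`, invented for illustration): the stub replaces the real lookup component so the caller can be tested in isolation.

```python
# The unit under test depends on a called component (a price lookup).
def total_price(items, lookup):
    return sum(lookup(item) for item in items)

# A stub replacing the real lookup component: a skeletal implementation
# returning canned values, so total_price can be tested without the
# real (possibly unfinished or slow) component.
def price_lookup_stub(item):
    return {"apple": 2, "pear": 3}.get(item, 0)

print(total_price(["apple", "pear"], price_lookup_stub))  # 5
```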

47. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
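A minimal illustration with a hypothetical `grade` function: its single decision has two branches, so two test inputs are needed to reach 100% branch coverage, while one input alone would leave a branch unexercised.

```python
def grade(score):
    if score >= 50:      # decision with two branches
        return "pass"    # branch taken when the decision is True
    return "fail"        # branch taken when the decision is False

# Two test cases together exercise both branches of grade.
executed = {grade(80), grade(30)}
print(executed == {"pass", "fail"})  # True: both branches were exercised
```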

48. An entity in a programming language - which is typically the smallest indivisible unit of execution.

49. A white box test design technique in which test cases are designed to execute definition and use pairs of variables.
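An illustrative sketch (hypothetical `discount` function): the variable `rate` has two definitions and one use, giving two definition-use pairs, and each test below exercises exactly one of them.

```python
def discount(price, member):
    rate = 0                             # definition 1 of `rate`
    if member:
        rate = 10                        # definition 2 of `rate`
    return price * (100 - rate) // 100   # use of `rate`

# Two definition-use pairs: (definition 1, use) and (definition 2, use).
non_member = discount(100, False)  # exercises the pair (definition 1, use)
member = discount(100, True)       # exercises the pair (definition 2, use)
print(non_member, member)  # 100 90
```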

50. Analysis of source code carried out without execution of that software.
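For example, Python's `ast` module can inspect source code without ever executing it. This toy analysis (not a real static analyzer) lists the functions defined in a snippet:

```python
import ast

# Static analysis sketch: parse the source into a syntax tree and walk
# it to find function definitions; the code itself is never run.
source = """
def f():
    pass

def g():
    pass
"""
tree = ast.parse(source)
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(functions)  # ['f', 'g']
```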