Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might sometimes find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. Insertion of additional code in the existing program in order to count coverage items.
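
For study purposes, here is a minimal sketch of what such inserted counting code can look like, with a hypothetical classify function and a hand-written hit counter (a real tool would insert equivalent probes automatically):

    # Hypothetical example: counting code inserted by hand into an existing function.
    coverage_hits = {"branch_positive": 0, "branch_non_positive": 0}

    def classify(n):
        if n > 0:
            coverage_hits["branch_positive"] += 1      # inserted probe
            return "positive"
        coverage_hits["branch_non_positive"] += 1      # inserted probe
        return "non-positive"

    classify(5)
    classify(-2)
    print(coverage_hits)  # shows which branches the test inputs reached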






2. Commercial Off-The-Shelf products. Products developed for the general market as opposed to those developed for a specific customer.






3. Not related to the actual functionality, e.g. reliability, efficiency, usability, maintainability, portability, etc.






4. Inputs - Expected Results - Actual Results - Anomalies - Date & Time - Procedure Step - Attempts to repeat - Testers - Observers






5. A task of maintaining and controlling changes to all entities of a system.






6. All possible combinations of input values and preconditions are tested.
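
As a rough illustration of why this is rarely practical, assume a form with three input fields whose value counts are invented for the example:

    # Assumed field sizes; even small inputs multiply quickly.
    values_per_field = [26, 10, 100]
    combinations = 1
    for n in values_per_field:
        combinations *= n
    print(combinations)  # 26 * 10 * 100 = 26,000 combinations, before any preconditions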






7. Operational testing performed at an external site without involvement of the developing organization.






8. Black-box testing technique used to create groups of input conditions that create the same kind of output.
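
A minimal sketch of how one representative value per group can be tested, assuming a hypothetical is_adult(age) function that accepts ages 0-130 and treats 18 and above as adult:

    import unittest

    def is_adult(age):
        # Hypothetical function under test.
        if age < 0 or age > 130:
            raise ValueError("invalid age")
        return age >= 18

    class RepresentativeValueTests(unittest.TestCase):
        # One value stands in for every input in the same group.
        def test_adult_group(self):
            self.assertTrue(is_adult(30))        # group: 18-130

        def test_minor_group(self):
            self.assertFalse(is_adult(10))       # group: 0-17

        def test_invalid_group(self):
            with self.assertRaises(ValueError):
                is_adult(-5)                     # group: below 0

    if __name__ == "__main__":
        unittest.main()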






9. Requirements that determine the functionality of a software system.






10. Assessment of changes required to different layers of documentation and software to implement a given change to the original requirements.






11. Process used to create a software product from initial conception to public release






12. A type of review that involves visual examination of documents to detect defects such as violations of development standards and non-conformance to higher-level documentation.






13. Response of the application to an input






14. Software products or applications designed to automate manual testing tasks.






15. Special-purpose software used to simulate a component called by the component under test
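
A minimal sketch of such a simulated called component, assuming the code under test calls a hypothetical payment gateway:

    # Component under test: it *calls* a payment gateway.
    def checkout(cart_total, gateway):
        return "paid" if gateway.charge(cart_total) else "declined"

    # Simulated called component: returns canned, predictable answers for the test.
    class CannedPaymentGateway:
        def charge(self, amount):
            return amount <= 100

    assert checkout(50, CannedPaymentGateway()) == "paid"
    assert checkout(500, CannedPaymentGateway()) == "declined"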






16. Frequency of tests failing per unit of measure (e.g. time, number of transactions, test cases executed).






17. A black-box test design technique used to identify possible causes of a problem by using the cause-effect diagram






18. Severity - Priority






19. Testing performed at the development organization's site but by people outside that organization (i.e. testing is performed by potential customers, users, or an independent testing team)






20. Actual inputs required to execute a test case






21. Ability of software to provide appropriate performance relative to the amount of resources used.






22. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).






23. Tracing requirements for a level of testing using test documentation from the test plan to the test script.






24. Components at the lowest level are tested first, with higher-level components simulated by drivers. Tested components are then used to test higher-level components. This is repeated until all levels have been tested.






25. Conditions required to begin testing activities.






26. Deviation of a software system from its expected delivery, service, or result






27. Allows storage of test input and expected results in one or more central data sources or databases.
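
A minimal sketch of the idea, with the inputs and expected results held in one data source (an in-memory CSV here; a file or database would play the same role):

    import csv
    import io

    # Assumed central data source holding inputs and expected results.
    TEST_DATA = io.StringIO("a,b,expected\n2,3,5\n10,-4,6\n0,0,0\n")

    def add(a, b):   # hypothetical function under test
        return a + b

    for row in csv.DictReader(TEST_DATA):
        assert add(int(row["a"]), int(row["b"])) == int(row["expected"]), row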






28. Combining components or systems into larger structural units or subsystems.






29. Based on analysis of functional specifications of a system.






30. Tests interfaces between components and between integrated components and systems.






31. Record details of test cases executed - Record order of execution - Record results






32. Bug, fault, internal error, problem, etc. A flaw in software that causes it to fail to perform its required functions.






33. A table showing combinations of inputs and their associated actions.
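
A small sketch of such a table expressed in code, with made-up input conditions (membership and order size) mapped to an action (a discount rate):

    # Each combination of the two input conditions maps to one action.
    ACTIONS = {
        (True,  True):  0.20,   # member, order over 100
        (True,  False): 0.10,   # member, order 100 or less
        (False, True):  0.05,   # non-member, order over 100
        (False, False): 0.00,   # non-member, order 100 or less
    }

    def discount(is_member, order_total):
        return ACTIONS[(is_member, order_total > 100)]

    assert discount(True, 150) == 0.20
    assert discount(False, 50) == 0.00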






34. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with requirements.






35. The capability of a software product to provide agreed and correct output with the required degree of precision






36. The ratio between the number of defects found and the size of the component/system tested.






37. Conditions ensuring that the testing process is complete and the object being tested is ready for the next stage.






38. Scripting technique that uses data files to store test input, expected results, and keywords related to a software application being tested.
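
A minimal sketch of the technique, with the action words and their data shown inline (in practice the rows would come from a spreadsheet or CSV file; the bank-account actions are invented for the example):

    # Each row: (action word, argument, expected result).
    script = [
        ("deposit", 100, 100),
        ("withdraw", 30, 70),
        ("deposit", 5, 75),
    ]

    balance = 0

    def deposit(amount):
        global balance
        balance += amount
        return balance

    def withdraw(amount):
        global balance
        balance -= amount
        return balance

    actions = {"deposit": deposit, "withdraw": withdraw}

    for word, argument, expected in script:
        assert actions[word](argument) == expected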






39. Tools used by developers to identify defects in programs.






40. A set of conditions that a system needs to meet in order to be accepted by end users






41. Waterfall - Iterative-incremental - "V"






42. Ad hoc method of exposing bugs based on past knowledge and experience of experts (e.g. empty strings, illegal characters, empty files, etc.).
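
A small sketch of this experience-based approach turned into checks, using the kinds of inputs mentioned above against a hypothetical parse_name function:

    def parse_name(raw):
        # Hypothetical function under test: should reject blank or missing input.
        if raw is None or raw.strip() == "":
            raise ValueError("empty name")
        return raw.strip()

    # Inputs chosen from experience of what tends to break input handling.
    suspicious_inputs = ["", "   ", None]
    for value in suspicious_inputs:
        try:
            parse_name(value)
            raise AssertionError(f"expected rejection of {value!r}")
        except ValueError:
            pass  # rejected as expected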






43. A document that provides the structure for writing test cases.






44. Unconfirmed - New - Open - Assigned - Resolved - Verified - Closed






45. A component of the incident report that determines the actual effect of the incident on the software and its users.






46. Events that occurred during the testing process that require investigation.






47. Components are integrated in the order in which they are developed






48. A metric to calculate the number of SINGLE condition outcomes that can independently affect the decision outcome.
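
A rough worked illustration for a decision with two conditions, A and B, combined as A and B (the truth values are just an example):

    # Decision under test: A and B.
    def decision(A, B):
        return A and B

    # Each condition is shown to independently change the decision outcome:
    assert decision(True, True) != decision(False, True)   # toggling A alone flips it
    assert decision(True, True) != decision(True, False)   # toggling B alone flips it
    # Three test cases (TT, FT, TF) are enough to show both independent effects here.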






49. Simple & easy to follow Its rigidity makes it easy to follow It's typically well planned - Systematic - Freezing requirements before development begins ensures no rework later Each phase has specific deliverables






50. Special-purpose software used to simulate a component that calls the component under test
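
For contrast with question 15, a minimal sketch of such a simulated calling component, assuming a low-level tax routine is the component under test and its real caller is not yet available:

    # Component under test: normally *called from* higher-level checkout code.
    def sales_tax(amount, rate=0.07):
        return round(amount * rate, 2)

    # Temporary code that plays the role of the missing caller.
    def run_checks():
        for amount, expected in [(100, 7.00), (19.99, 1.40), (0, 0.00)]:
            assert sales_tax(amount) == expected
        print("all checks passed")

    if __name__ == "__main__":
        run_checks()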