Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you may sometimes find the answers obvious, but you will see that it reinforces your understanding each time you take the test.
1. A unique identifier for each incident report generated during test execution.






2. Measures the amount of testing performed by a collection of test cases






3. Waterfall, iterative-incremental, "V"






4. Begins with an initial requirements specification phase and ends with implementation and maintenance phases, with cyclical transitions between the phases in between.






5. Incident Report - Identifier - Summary - Incident - Description - Impact






6. Testing an integrated system to validate it meets requirements






7. Uses risks to: identify test techniques; determine how much testing is required; and prioritize tests, with high-priority risks addressed first






8. Integration approach in which components or subsystems are combined all at once rather than in stages.






9. Software products or applications designed to automate manual testing tasks.






10. Integrate different kinds of tools to make test management more efficient and simple.






11. Process used to create a software product from initial conception to public release






12. Simple and easy to follow; its rigidity makes it easy to follow; typically well planned; systematic; freezing requirements before development begins ensures no rework later; each phase has specific deliverables






13. A review not based on a formal documented procedure






14. Components or subsystems are integrated and tested one or a few at a time until all of the components or subsystems are integrated and tested.






15. Execute individual test cases and groups of test cases; record results; compare actual results with expected results; report differences between actual and expected; re-execute to verify fixes






16. Extract data from existing databases for use during test execution; make data anonymous; generate new records populated with random data; sort records; construct a large number of similar records from a template






17. A table showing combinations of inputs and their associated actions.
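For illustration, such a table can be sketched directly in code; the login conditions and actions below are hypothetical, not taken from any particular specification:

    # Hypothetical decision table: each row maps a combination of input
    # conditions to the action the system should take.
    DECISION_TABLE = [
        # (valid_user, valid_password) -> action
        ((True,  True),  "grant access"),
        ((True,  False), "show password error"),
        ((False, True),  "show unknown-user error"),
        ((False, False), "show unknown-user error"),
    ]

    def expected_action(valid_user, valid_password):
        # Look up the action associated with this combination of inputs.
        for conditions, action in DECISION_TABLE:
            if conditions == (valid_user, valid_password):
                return action
        raise ValueError("combination not covered by the table")

    assert expected_action(True, False) == "show password error"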






18. Tools used to support and automate the management of various testing documents, such as the test policy, test strategy, and test plan






19. Frequency of tests failing per unit of measure (e.g., time, number of transactions, or test cases executed)






20. Combining components or systems into larger structural units or subsystems.






21. Occurrences that happen before and after an unexpected event






22. Tests interfaces between components and between integrated components and systems.






23. Measure and analyze the results of testing; monitor, document, and share the results of testing; report information on testing; initiate actions to improve processes; make decisions about testing






24. The smallest software item that can be tested in isolation.






25. Tests functional or nonfunctional attributes of a system or its components but without referring to the internal structure of the system or its components






26. All possible combinations of input values and preconditions are tested.
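As a minimal sketch of what that means in practice, assuming a toy function with three boolean inputs (all names here are hypothetical):

    from itertools import product

    def feature_enabled(is_admin, beta_opt_in, region_supported):
        # Hypothetical unit under test.
        return is_admin or (beta_opt_in and region_supported)

    # Exhaustive testing: every combination of the three boolean inputs,
    # 2**3 = 8 cases in total. Real systems rarely make this feasible.
    for combo in product([False, True], repeat=3):
        print(combo, "->", feature_enabled(*combo))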






27. Record details of the test cases executed; record the order of execution; record results






28. Black-box testing technique used to create groups of input conditions that create the same kind of output.
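A minimal sketch of the idea, assuming a hypothetical rule that accepts ages 18 through 65:

    def is_eligible(age):
        # Hypothetical component under test: accepts ages 18-65 inclusive.
        return 18 <= age <= 65

    # One representative value per partition; inputs in the same partition
    # are expected to produce the same kind of output.
    partitions = {
        "below range (invalid)": 10,
        "within range (valid)": 40,
        "above range (invalid)": 80,
    }
    for name, representative in partitions.items():
        print(name, "->", is_eligible(representative))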






29. Ability of software to collaborate with one or more specified systems, subsystems, or components.






30. Tools used to store and manage incidents, i.e., defects, failures, or anomalies.






31. Behavior or response of a software application that you observe when you execute the action steps in the test case.






32. Testing performed to detect defects in interfaces and interaction between integrated components. Also called "integration testing in the small".






33. Specific groups that represent a set of valid or invalid partitions for input conditions.






34. The capability of a software product to provide agreed and correct output with the required degree of precision






35. Special additions or changes to the environment required to run a test case.






36. A metric used to calculate the number of all condition or sub-expression outcomes in the code that are exercised by a test suite.
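A rough worked example with hypothetical numbers, assuming the metric is reported as the fraction of condition outcomes exercised:

    total_conditions = 10                   # hypothetical count of conditions in the code
    total_outcomes = total_conditions * 2   # each condition has a true and a false outcome
    exercised_outcomes = 14                 # hypothetical number hit by the test suite

    coverage = exercised_outcomes / total_outcomes
    print(f"condition coverage: {coverage:.0%}")  # -> condition coverage: 70%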






37. Testing performed to determine whether the system meets acceptance criteria






38. Testing software components that are separately testable. Also called module, program, and unit testing.
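A minimal example using Python's standard unittest module; the function being tested is hypothetical:

    import unittest

    def add(a, b):
        # Hypothetical smallest separately testable item.
        return a + b

    class AddTests(unittest.TestCase):
        def test_adds_two_numbers(self):
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()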






39. Requirements Analysis - Design - Coding - Integration - Implementation - Maintenance






40. Tools used by developers to identify defects in programs.






41. Commercial Off-The-Shelf products. Products developed for the general market as opposed to those developed for a specific customer.






42. Tool or hardware device that runs in parallel to the assembled component. It manages, records, and analyzes the behavior of the tested system.






43. Informal testing technique in which test planning and execution run in parallel






44. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with requirements.






45. Events that occurred during the testing process or investigation.






46. A set of conditions that a system needs to meet in order to be accepted by end users






47. Special-purpose software used to simulate a component that calls the component under test
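A minimal sketch of such a driver, with a hypothetical component under test standing in for code that would normally be called by another module:

    def compute_discount(order_total):
        # Hypothetical component under test, normally invoked by a checkout module.
        return order_total * 0.1 if order_total > 100 else 0.0

    def driver():
        # Stand-in for the calling component: invokes the component under
        # test with known inputs and checks the results.
        cases = [(50, 0.0), (200, 20.0)]
        for total, expected in cases:
            actual = compute_discount(total)
            print(total, actual, "OK" if actual == expected else "MISMATCH")

    if __name__ == "__main__":
        driver()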






48. Ability of software to provide appropriate performance relative to amount of resources used.






49. Input or combination of inputs required to test software.






50. Testing software in its operational environment