Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding each time you take the test.
1. Components at the lowest level are tested first, with higher-level components simulated by drivers. Tested components are then used to test higher-level components. Repeat until all levels have been tested.
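This statement describes bottom-up integration. A minimal Python sketch of the idea, with all function names hypothetical: a throwaway driver exercises the lowest-level component first, and once that passes, the real component is used while testing the next level up.

    # Lowest-level component: tested first, in isolation.
    def parse_amount(text):
        return round(float(text), 2)

    # Driver: throwaway test code standing in for the higher-level
    # component that will eventually call parse_amount().
    def drive_parse_amount():
        assert parse_amount("19.50") == 19.5

    # Once the low-level component passes, it is used for real while
    # testing the next level up (invoice_total is also hypothetical).
    def invoice_total(lines):
        return sum(parse_amount(line) for line in lines)

    def drive_invoice_total():
        assert invoice_total(["19.50", "0.25"]) == 19.75

    if __name__ == "__main__":
        drive_parse_amount()
        drive_invoice_total()
        print("bottom-up integration steps passed")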






2. Response of the application to an input






3. A document that records the description of each event that occurs during the testing process and that requires further investigation






4. Actual inputs required to execute a test case






5. Sequence in which instructions are executed through a component or system






6. The smallest software item that can be tested in isolation.






7. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with (IAW) requirements.






8. Operational testing performed at an external site without involvement of the developing organization.






9. Conditions required to begin testing activities.






10. Integrate different kinds of tools to make test management more efficient and simpler.






11. Begins with an initial requirements specification phase and ends with implementation and maintenance phases, with cyclical transitions in between.






12. A table showing combinations of inputs and their associated actions.
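For illustration, a minimal sketch of such a table in Python; the conditions, rules, and actions below are invented for the example.

    # Decision table: each rule maps a combination of condition values
    # to the expected action. Conditions and actions are hypothetical.
    DECISION_TABLE = [
        # (is_member, order_over_100) -> action
        ((True,  True),  "free_shipping"),
        ((True,  False), "discounted_shipping"),
        ((False, True),  "standard_shipping"),
        ((False, False), "standard_shipping"),
    ]

    def expected_action(is_member, order_over_100):
        for conditions, action in DECISION_TABLE:
            if conditions == (is_member, order_over_100):
                return action
        raise ValueError("no rule covers this combination")

    assert expected_action(True, False) == "discounted_shipping"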






13. Behavior or response of a software application that you observe when you execute the action steps in the test case.






14. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).






15. Waterfall - iterative-incremental - "V"






16. Record details of test cases executed - Record order of execution - Record results






17. Ability of software to provide appropriate performance relative to the amount of resources used.






18. A unique identifier for each incident report generated during test execution.






19. Develop & prioritize test cases - Create groups of test cases - Set up the test environment






20. Incremental rollout - Adapt processes, testware, etc. to fit with use of tool - Adequate training - Define guidelines for use of tool (from pilot project) - Implement continuous improvement mechanism - Monitor use of tool - Implement ways to learn lessons






21. Linear Code Sequence and Jump.






22. Fixed - Won't Fix - Later - Remind - Duplicate - Incomplete - Not a Bug - Invalid etc.






23. Informal testing technique in which test planning and execution run in parallel






24. Used to replace a component that calls another component.
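Per common ISTQB usage, a driver stands in for a component that calls the code under test, while a stub stands in for a component that the code under test calls. A minimal Python sketch of both, with all names hypothetical:

    # Hypothetical component under test: formats a report line.
    def format_report_line(record, price_lookup):
        price = price_lookup(record["sku"])   # calls a lower-level service
        return f'{record["sku"]}: {price:.2f}'

    # Stub: stands in for the lower-level component that the code
    # under test CALLS (the real price service is not available yet).
    def stub_price_lookup(sku):
        return 9.99  # canned answer

    # Driver: stands in for the higher-level component that would
    # normally CALL format_report_line, so it can be tested in isolation.
    def driver():
        line = format_report_line({"sku": "A-1"}, stub_price_lookup)
        assert line == "A-1: 9.99"

    if __name__ == "__main__":
        driver()
        print("driver/stub sketch passed")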






25. Uses risks to: Identify test techniques - Determine how much testing is required - Prioritize tests, with high-priority risks first






26. A metric to calculate the number of SINGLE condition outcomes that can independently affect the decision outcome.
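A small worked illustration of a single condition independently affecting a decision outcome (the decision and values are invented): hold every other condition fixed, flip the one condition, and check that the decision flips.

    # Decision under test (hypothetical): grant = A and B
    def grant(a, b):
        return a and b

    # A independently affects the outcome: hold B fixed at True,
    # flip A, and the decision flips.
    assert grant(True, True) != grant(False, True)

    # B independently affects the outcome: hold A fixed at True,
    # flip B, and the decision flips.
    assert grant(True, True) != grant(True, False)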






27. Execute individual & groups of test cases - Record results - Compare results with expected - Report differences between actual & expected - Re-execute to verify fixes






28. Allows storage of test input and expected results in one or more central data sources or databases.
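A minimal sketch of the data-driven idea in Python, assuming the inputs and expected results live in an external data source; an in-memory CSV stands in for that source here, and the function under test is hypothetical.

    import csv, io

    # Hypothetical function under test.
    def add(a, b):
        return a + b

    # Inputs and expected results come from a data source, not from
    # the test code itself; here an in-memory CSV stands in for it.
    CASES = io.StringIO("a,b,expected\n1,2,3\n10,-4,6\n")

    for row in csv.DictReader(CASES):
        assert add(int(row["a"]), int(row["b"])) == int(row["expected"]), row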






29. Testing software components that are separately testable. Also called module, program, and unit testing.






30. An event or item that can be tested using one or more test cases






31. An analysis that determines the portion of code in the software executed by a set of test cases






32. The capability of a software product to provide functions that address explicit and implicit requirements when the product is used under specified conditions.






33. Planning & Control - Analysis and Design - Implementation and Execution - Evaluating Exit Criteria and Reporting - Closure






34. Tools used to store and manage incident reports, i.e. defects, failures, or anomalies.






35. A code metric that specifies the number of independent paths through a program. Enables identification of complex (and therefore high-risk) areas of code.
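As a worked example of the usual formula V(G) = E - N + 2P (E edges, N nodes, P connected components of the control-flow graph), applied to an invented graph for a single if/else:

    # Cyclomatic complexity of a control-flow graph: V(G) = E - N + 2P.
    # Hypothetical graph for: if x > 0: ... else: ...  (one decision)
    nodes = ["entry", "then", "else", "exit"]
    edges = [("entry", "then"), ("entry", "else"),
             ("then", "exit"), ("else", "exit")]
    P = 1  # one connected component

    v_of_g = len(edges) - len(nodes) + 2 * P
    assert v_of_g == 2   # one decision point -> two independent paths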






36. Ad hoc method of exposing bugs based on past knowledge and experience of experts (e.g. empty strings, illegal characters, empty files, etc.).
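A minimal sketch of this style of testing in Python; the function under test and the "suspicious" inputs are invented for illustration.

    # Error guessing: probe inputs that experience says often break code.
    def word_count(text):
        return len(text.split())

    suspicious_inputs = ["", "   ", "\t\n", "héllo wörld", "a" * 100000]

    for text in suspicious_inputs:
        count = word_count(text)
        assert count >= 0   # should never crash or return a negative count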






37. One defect prevents the detection of another.






38. Increased load (transactions) used to test the behavior of a system under high volume.






39. Testing an integrated system to validate that it meets requirements






40. Commercial Off-The-Shelf products. Products developed for the general market as opposed to those developed for a specific customer.






41. A document that provides the structure for writing test cases.






42. Nonfunctional testing including testing: ease of fixing defects - ease of meeting new requirements - ease of maintenance






43. Input or combination of inputs required to test software.






44. The capability of a software product to provide agreed and correct output with the required degree of precision






45. A test case design technique for a software component to ensure that the outcome of a decision point or branch in code is tested.
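A minimal sketch of designing tests so that each outcome of a decision is exercised; the function under test is hypothetical.

    # Hypothetical code under test with one decision point.
    def classify(age):
        if age >= 18:          # decision point
            return "adult"
        return "minor"

    # Decision (branch) testing: at least one test per decision outcome.
    assert classify(30) == "adult"   # True outcome of the decision
    assert classify(12) == "minor"   # False outcome of the decision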






46. Special-purpose software used to simulate a component that calls the component under test






47. White-box design technique used to design test cases for a software component using LCSAJ.






48. Testing performed at the development organization's site but by people outside the organization (i.e. testing is performed by potential customers, users, or an independent testing team)






49. All possible combinations of input values and preconditions are tested.
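A toy illustration in Python, assuming an invented function whose input domain is small enough to enumerate completely; real input domains are rarely this small, which is why this approach is usually impractical.

    from itertools import product

    # Hypothetical function with a tiny, fully enumerable input domain.
    def majority(a, b, c):
        return (a + b + c) >= 2

    # Exhaustive testing: every combination of the three boolean inputs
    # (2**3 = 8 cases).
    for a, b, c in product([0, 1], repeat=3):
        expected = sorted([a, b, c])[1] == 1   # median of three booleans
        assert majority(a, b, c) == expected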






50. Process used to create a software product from initial conception to public release