Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may occasionally find an answer obvious; even so, retaking the test reinforces your understanding each time.
1. Special additions or changes to the environment required to run a test case.






2. Planning & Control - Analysis and Design - Implementation and Execution - Evaluating Exit Criteria and Reporting - Closure






3. Waterfall - Iterative-Incremental - "V"






4. An event or item that can be tested using one or more test cases






5. Inputs - Expected Results - Actual Results - Anomalies - Date & Time - Procedure Step - Attempts to repeat - Testers - Observers






6. The capability of a software product to provide functions that meet explicit and implicit requirements when the product is used under specified conditions.






7. Software products or applications designed to automate manual testing tasks.






8. ID SW products - components - risks - objectives; Estimate effort; Consider approach; Ensure adherence to organization policies; Determine team structure; Set up test environment; Schedule testing tasks & activities






9. Testing an integrated system to validate it meets requirements






10. Ability of software to interact with one or more specified systems, subsystems, or components.






11. The capability of a software product to provide agreed and correct output with the required degree of precision






12. One defect prevents the detection of another.






13. Measure & analyze results of testing; Monitor, document & share results of testing; Report information on testing; Initiate actions to improve processes; Make decisions about testing






14. Test case design technique used to identify bugs occurring on or around boundaries of equivalence partitions.
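The statement above appears to describe boundary value analysis. A minimal sketch in Python; the age range 18..65 is an invented example, not from the source:

```python
def boundary_values(low, high):
    """Return boundary-value test inputs for the valid partition [low, high]:
    the value on each boundary plus the value just outside it."""
    return [low - 1, low, high, high + 1]

# Illustrative example: a field accepting ages 18..65
print(boundary_values(18, 65))  # [17, 18, 65, 66]
```

Defects cluster at partition edges, so these four inputs exercise the riskiest points of the range.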






15. Special-purpose software used to simulate a component called by the component under test
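If the statement above is about a test stub, a minimal sketch follows; the payment-gateway scenario and all names are illustrative, not from any real library:

```python
def charge_card_stub(card_number, amount):
    """Stub: returns a canned response instead of calling a real gateway."""
    return {"status": "approved", "amount": amount}

def place_order(card_number, amount, charge=charge_card_stub):
    """Component under test; its gateway dependency is replaced by the stub."""
    result = charge(card_number, amount)
    return result["status"] == "approved"

print(place_order("4111-1111", 25.00))  # True
```

The stub lets the calling component be tested in isolation before the real called component exists.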






16. A metric used to calculate the number of ALL condition or sub-expression outcomes in code that are executed by a test suite.






17. Based on analysis of functional specifications of a system.






18. A functional testing approach in which test cases are designed based on business processes.






19. Deviation of a software system from its expected delivery, service, or result.






20. Testing performed at the development organization's site but by people outside the organization (i.e., testing is performed by potential customers, users, or an independent testing team).






21. A type of review that involves visual examination of documents to detect defects such as violations of development standards and non-conformance to higher-level documentation.






22. Severity - Priority






23. Tools used to provide support for and automation of managing various testing documents, such as the test policy, test strategy, and test plan.






24. Specific groups that represent a set of valid or invalid partitions for input conditions.
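The statement above suggests equivalence partitioning. A minimal sketch; the age field and its partitions are invented for illustration:

```python
def partition(age):
    """Classify an input into one of three illustrative equivalence
    partitions for a field that accepts ages 18..65."""
    if age < 18:
        return "invalid: too low"
    if age <= 65:
        return "valid"
    return "invalid: too high"

# One representative test value per partition suffices:
for value in (10, 40, 80):
    print(value, partition(value))
```

Inputs within one partition are assumed to be handled the same way, so testing one representative per partition covers the class.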






25. Used to test the functionality of software as mentioned in software requirement specifications.






26. Testing performed to detect defects in interfaces and interactions between integrated components. Also called "integration testing in the small".






27. Fixed - Won't Fix - Later - Remind - Duplicate - Incomplete - Not a Bug - Invalid etc.






28. Begins with an initial requirements specification phase and ends with implementation and maintenance phases, with cyclical transitions between phases.






29. A code metric that specifies the number of independent paths through a program. Enables identification of complex (and therefore high-risk) areas of code.
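The statement above matches McCabe's cyclomatic complexity. A minimal sketch of the formula V(G) = E - N + 2P; the if/else control-flow graph in the example is an assumed illustration:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's metric V(G) = E - N + 2P for a control-flow graph with
    E edges, N nodes, and P connected components."""
    return edges - nodes + 2 * components

# A function with a single if/else has 4 nodes (condition, then, else,
# merge) and 4 edges, giving V(G) = 2: two independent paths to test.
print(cyclomatic_complexity(edges=4, nodes=4))  # 2
```

Higher values flag code with more independent paths, and therefore more test cases needed for path coverage.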






30. Develop & prioritize test cases; Create groups of test cases; Set up test environment






31. Human action that generates an incorrect result.






32. Unconfirmed - New - Open - Assigned - Resolved - Verified - Closed






33. Tools used to store and manage incidents, defects, failures, or anomalies.






34. Incremental rollout; Adapt processes, testware, etc. to fit with use of tool; Adequate training; Define guidelines for use of tool (from pilot project); Implement continuous improvement mechanism; Monitor use of tool; Implement ways to learn lessons






35. Separation of testing responsibilities which encourages the accomplishment of objective testing






36. Schedule tests; Manage test activities; Provide interfaces to different tools; Provide traceability of tests; Log test results; Prepare progress reports






37. Components or subsystems are integrated and tested one or some at a time until all the components or subsystems are integrated and tested.






38. White-box design technique used to design test cases for a software component using LCSAJ.






39. A unique identifier for each incident report generated during test execution.






40. Testing software components that are separately testable. Also called module, program, or unit testing.






41. Input or combination of inputs required to test software.






42. Not related to the actual functionality, e.g. reliability, efficiency, usability, maintainability, portability, etc.






43. An analysis that determines the portion of the software's code executed by a set of test cases.






44. Insertion of additional code in the existing program in order to count coverage items.
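The statement above appears to describe code instrumentation. A hand-rolled sketch: counting statements are inserted into an ordinary function so that a coverage report can show which branches a test run executed. All names are illustrative:

```python
hits = {"then": 0, "else": 0}  # inserted bookkeeping, not original logic

def absolute(x):
    if x < 0:
        hits["then"] += 1   # inserted probe
        return -x
    else:
        hits["else"] += 1   # inserted probe
        return x

absolute(-3)
absolute(5)
print(hits)  # {'then': 1, 'else': 1}: both branches covered
```

Real coverage tools insert equivalent probes automatically rather than by hand, but the counting principle is the same.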






45. Assessment of changes required to different layers of documentation and software to implement a given change to the original requirements.






46. Operational testing performed at an _external_ site without involvement of the developing organization.






47. Component - Integration - System - Acceptance






48. Conditions ensuring the testing process is complete and the object being tested is ready for the next stage.






49. A metric to calculate the number of SINGLE condition outcomes that can independently affect the decision outcome.






50. Ability of software to provide appropriate performance relative to the amount of resources used.