Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from the answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. A component of the incident report that determines the actual effect of the incident on the software and its users.






2. Test case design technique used to identify bugs occurring on or around boundaries of equivalence partitions.
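(Study aid) Testing on and just around the edges of a partition can be sketched in Python; the 18-65 range and the helper name are made-up examples, not part of any standard:

```python
def boundary_values(low, high):
    """Return test values for an integer range [low, high]:
    just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Hypothetical input field that accepts ages 18 through 65:
values = boundary_values(18, 65)  # [17, 18, 19, 64, 65, 66]
```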






3. Specific groups that represent a set of valid or invalid partitions for input conditions.






4. Conditions required to begin testing activities.






5. Testing performed to determine whether the system meets acceptance criteria.






6. One defect prevents the detection of another.






7. Black-box test design technique - test cases are designed from a decision table.
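(Study aid) Deriving one test case per rule of a decision table can be sketched as follows; the discount policy and all names are hypothetical:

```python
# Each rule maps a combination of conditions to an expected action.
# Hypothetical policy: discount depends on membership and order size.
decision_table = [
    # (is_member, order_over_100) -> expected discount %
    ((True,  True),  15),
    ((True,  False), 10),
    ((False, True),   5),
    ((False, False),  0),
]

def discount(is_member, order_over_100):
    """Component under test (made up for this sketch)."""
    if is_member:
        return 15 if order_over_100 else 10
    return 5 if order_over_100 else 0

# One test case per rule, derived straight from the table:
for conditions, expected in decision_table:
    assert discount(*conditions) == expected
```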






8. Linear Code Sequence and Jump.






9. Testing software components that are separately testable. Also known as module, program, or unit testing.






10. Testing performed at the development organization's site but by people outside the organization (i.e. testing is performed by potential customers, users, or an independent testing team).






11. Fixed - Won't Fix - Later - Remind - Duplicate - Incomplete - Not a Bug - Invalid etc.






12. Incremental rollout - adapt processes, testware, etc. to fit use of the tool - adequate training - define guidelines for use of the tool (from a pilot project) - implement a continuous improvement mechanism - monitor use of the tool - implement ways to learn lessons.






13. Sequence in which instructions are executed through a component or system






14. Uses risks to: identify test techniques - determine how much testing is required - prioritize tests, with high-priority risks first.






15. Occurrences that happen before and after an unexpected event






16. Tests functional or nonfunctional attributes of a system or its components but without referring to the internal structure of the system or its components






17. Black-box testing technique used to create groups of input conditions that create the same kind of output.
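(Study aid) Grouping inputs so that one representative value stands in for a whole partition can be sketched as follows; the age validator and its ranges are invented for illustration:

```python
def classify_age(age):
    """Hypothetical validator: valid ages are 0-120."""
    if age < 0:
        return "invalid-low"
    if age > 120:
        return "invalid-high"
    return "valid"

# One representative input per partition covers the whole group:
representatives = {"invalid-low": -5, "valid": 30, "invalid-high": 200}
for expected, value in representatives.items():
    assert classify_age(value) == expected
```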






18. Events that occur during the testing process that require investigation.






19. A task of maintaining and controlling changes to all entities of a system.






20. Integrate different kinds of tools to make test management more efficient and simple.






21. Human action that generates an incorrect result.






22. A black-box test design technique used to identify possible causes of a problem by using the cause-effect diagram






23. An analysis that determines the portion of code in software executed by a set of test cases.






24. A metric to calculate the number of SINGLE condition outcomes that can independently affect the decision outcome.






25. Check to make sure a system adheres to a defined set of standards, conventions, or regulations in laws and similar specifications.






26. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).






27. Tools used by developers to identify defects in programs.






28. Execute individual and groups of test cases - record results - compare results with expected results - report differences between actual and expected - re-execute to verify fixes.
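(Study aid) The execute/compare/report cycle above can be sketched as a toy runner; the function names and test data are hypothetical:

```python
def run_tests(cases, func):
    """Execute cases, compare actual with expected results,
    and report the differences."""
    differences = []
    for args, expected in cases:
        actual = func(*args)
        if actual != expected:
            differences.append((args, expected, actual))
    return differences

# The last expectation is deliberately wrong, so it is reported:
cases = [((2, 3), 5), ((0, 0), 0), ((2, 2), 5)]
diffs = run_tests(cases, lambda a, b: a + b)  # [((2, 2), 5, 4)]
```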






29. Incident Report - Identifier - Summary - Incident - Description - Impact






30. A set of conditions that a system needs to meet in order to be accepted by end users






31. The ratio between the number of defects found and the size of the component/system tested.






32. A functional testing approach in which test cases are designed based on business processes.






33. Special additions or changes to the environment required to run a test case.






34. Review documents (requirements, architecture, design, etc.) - identify conditions to be tested - design tests - assess testability of requirements - identify infrastructure and tools.






35. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with requirements.






36. Not related to the actual functionality, e.g. reliability, efficiency, usability, maintainability, portability, etc.






37. Examine whether changes made to an operational system cause defects.






38. Testing an integrated system to validate it meets requirements






39. Components at lowest level are tested first with higher-level components simulated by drivers. Tested components are then used to test higher-level components. Repeat until all levels have been tested.






40. Requirements Analysis - Design - Coding - Integration - Implementation - Maintenance






41. Tool or hardware device that runs in parallel with the assembled component. It manages, records, and analyzes the behavior of the tested system.






42. Special-purpose software used to simulate a component called by the component under test
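(Study aid) Such a simulated called component might look like this minimal sketch; `PaymentGatewayStub` and `Checkout` are invented names, not a real API:

```python
class PaymentGatewayStub:
    """Stands in for a real payment service that the component
    under test would call; returns a canned response."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class Checkout:
    """Component under test: depends on a gateway it calls."""
    def __init__(self, gateway):
        self.gateway = gateway
    def pay(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

# The stub lets Checkout be tested without the real service:
assert Checkout(PaymentGatewayStub()).pay(25.00) is True
```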






43. Insertion of additional code in the existing program in order to count coverage items.
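(Study aid) Inserting counting code to track coverage items can be sketched as follows; real tools insert these probes automatically, and all names here are made up:

```python
# Counters for the coverage items (here: the two branches).
hits = {"then": 0, "else": 0}

def absolute(x):
    if x < 0:
        hits["then"] += 1   # inserted probe
        return -x
    hits["else"] += 1       # inserted probe
    return x

# Running the tests bumps the counters, revealing what was covered:
absolute(-3)
absolute(7)
```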






44. The capability of a software product to provide functions that address explicit and implicit requirements for the product under specified conditions.






45. A metric used to calculate the number of ALL condition or sub-expression outcomes in code that are executed by a test suite.






46. Ease with which software can be modified to correct defects, meet new requirements, make future maintenance easier, or adapt to a changed environment.






47. Components or subsystems are integrated and tested, one or a few at a time, until all the components or subsystems are integrated and tested.






48. Software products or applications designed to automate manual testing tasks.






49. Conditions ensuring the testing process is complete and the object being tested is ready for the next stage.






50. Separation of testing responsibilities, which encourages objective testing.