Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with (IAW) requirements.






2. A task of maintaining and controlling changes to all entities of a system.






3. Special additions or changes to the environment required to run a test case.






4. Behavior or response of a software application that you observe when you execute the action steps in the test case.






5. White-box design technique used to design test cases for a software component using LCSAJ.






6. Ease with which software can be modified to correct defects, meet new requirements, make future maintenance easier, or adapt to a changed environment.






7. Black-box test design technique - test cases are designed from a decision table.
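As a study aid, here is a minimal hypothetical sketch of this technique in Python. The login rule, its conditions, and its actions below are invented for illustration; each row of the table becomes one test case.

```python
# Hypothetical decision table for a login rule.
# Conditions: credentials valid? account locked?
# Each combination of condition outcomes maps to one expected action,
# and each entry in the table is designed as a separate test case.

decision_table = {
    # (valid_credentials, account_locked): action
    (True, False): "grant access",
    (True, True): "show locked message",
    (False, False): "reject",
    (False, True): "reject",
}

def login_action(valid, locked):
    """Look up the expected action for a combination of conditions."""
    return decision_table[(valid, locked)]
```

Deriving test cases is then mechanical: iterate over the table's keys and check the system's behavior against each expected action.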






8. Actual inputs required to execute a test case






9. Incremental rollout; adapt processes, testware, etc. to fit with use of the tool; adequate training; define guidelines for use of the tool (from the pilot project); implement a continuous improvement mechanism; monitor use of the tool; implement ways to learn lessons.






10. All possible combinations of input values and preconditions are tested.
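A small hypothetical illustration of why this approach is rarely practical: even a tiny form with three fields, each with only a handful of values, already produces dozens of combinations (the field names and values are invented).

```python
# Hypothetical illustration: counting all input combinations for a
# small form. Real systems have far larger input spaces, which is why
# testing every combination of inputs and preconditions is infeasible.

from itertools import product

field_values = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "language": ["en", "de", "fr", "ja"],
}

all_combinations = list(product(*field_values.values()))
count = len(all_combinations)  # 3 * 3 * 4 = 36
```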






11. Tool or hardware device that runs in parallel to the assembled component. It monitors, records, and analyzes the behavior of the tested system.






12. Develop & prioritize test cases; create groups of test cases; set up the test environment.






13. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).






14. Components are combined and tested in the order in which basic functionalities start working






15. Uses risks to: identify test techniques; determine how much testing is required; prioritize tests, with high-priority risks first.






16. Based on the generic iterative-incremental model. Teams work by dividing project tasks into small increments involving only short-term planning to implement various iterations






17. Events that occurred during the testing process that require investigation.






18. Black-box testing technique used to create groups of input conditions that create the same kind of output.
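As a study aid, a minimal hypothetical sketch of this technique in Python. The age-based discount rule and partition boundaries below are invented; the point is that one representative value per partition is enough, because all inputs in a partition produce the same kind of output.

```python
# Hypothetical example: equivalence partitions for an age-based
# discount rule. The input domain splits into four partitions
# (invalid, child, adult, senior); any value inside a partition
# behaves like every other value in it.

def discount(age):
    """Return discount rate: invalid (<0), child (0-12), adult (13-64), senior (65+)."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age <= 12:
        return 0.5
    if age <= 64:
        return 0.0
    return 0.3

# One representative test value per partition.
partitions = {
    "invalid": -5,
    "child": 7,
    "adult": 30,
    "senior": 80,
}
```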






19. Find defects in code while the software application being tested is running.






20. A test case design technique for a software component to ensure that the outcome of a decision point or branch in code is tested.
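A minimal hypothetical sketch of this idea in Python: the function below has one decision point, and two test values together exercise both its outcomes, giving full branch coverage of the function (the function itself is invented for illustration).

```python
# Hypothetical example: one decision point with two outcomes.
# Test value 4 takes the true branch, test value 7 takes the
# false branch; together they cover both outcomes of the decision.

def classify(n):
    if n % 2 == 0:   # decision point: true branch and false branch
        return "even"
    return "odd"

# Branch-coverage test set: one input per decision outcome.
branch_tests = {4: "even", 7: "odd"}
```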






21. Integration Approach: A frame or backbone is created and components are progressively integrated into it.






22. Integrate different kinds of tools to make test management more efficient and simple.






23. Inputs - Expected Results - Actual Results - Anomalies - Date & Time - Procedure Step - Attempts to repeat - Testers - Observers






24. The capability of a software product to provide agreed and correct output with the required degree of precision






25. Component - Integration - System - Acceptance






26. Tools used to identify and calculate coverage items in program code.






27. One defect prevents the detection of another.






28. Record details of test cases executed; record order of execution; record results.






29. Used to test the functionality of software as mentioned in software requirement specifications.






30. Ability of software to collaborate with one or more specified systems, subsystems, or components.






31. Tools used to provide support for and automation of managing various testing documents, such as the test policy, test strategy, and test plan.






32. A document that records the description of each event that occurs during the testing process and that requires further investigation






33. Human action that generates an incorrect result.






34. Requirements that determine the functionality of a software system.






35. Extract data from existing databases for use during execution of tests; make data anonymous; generate new records populated with random data; sort records; construct a large number of similar records from a template.






36. Special-purpose software used to simulate a component called by the component under test
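A minimal hypothetical sketch in Python: the component under test calls a payment gateway that is not yet available, so a stub stands in for it and returns a canned response (the gateway, its `charge` method, and the checkout logic are all invented for illustration).

```python
# Hypothetical stub: simulates a payment gateway called by the
# component under test, so the caller can be tested in isolation
# before the real gateway exists or is reachable.

class PaymentGatewayStub:
    """Stands in for the real gateway; returns a canned response."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

def checkout(cart_total, gateway):
    """Component under test: succeeds when the gateway approves the charge."""
    receipt = gateway.charge(cart_total)
    return receipt["status"] == "approved"
```

In a real test, the stub would be passed in wherever the production gateway is normally injected.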






37. A black-box test design technique used to identify possible causes of a problem by using the cause-effect diagram






38. A metric used to calculate the number of all condition or sub-expression outcomes in code that are exercised by a test suite.






39. Testing performed to detect defects in interfaces and interaction between integrated components. Also called "integration testing in the small".






40. ID SW products - components - risks - objectives; Estimate effort; Consider approach; Ensure adherence to organization policies; Determine team structure; Set up test environment; Schedule testing tasks & activities






41. Testing performed based on the contract between a customer and the development organization. Customer uses results of the test to determine acceptance of software.






42. Schedule tests; manage test activities; provide interfaces to different tools; provide traceability of tests; log test results; prepare progress reports.






43. Test case design technique used to identify bugs occurring on or around boundaries of equivalence partitions.
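A minimal hypothetical sketch in Python: for an invented valid range of 1 to 100, the technique tests each boundary and its immediate neighbors, since defects cluster at partition edges.

```python
# Hypothetical example: boundary values around the valid partition
# [1, 100]. We test each boundary plus its immediate neighbors,
# where off-by-one defects are most likely to appear.

def is_valid_quantity(q):
    return 1 <= q <= 100

LOWER, UPPER = 1, 100
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]
results = [is_valid_quantity(q) for q in boundary_values]
# → [False, True, True, True, True, False]
```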






44. Process used to create a SW product from initial conception to public release






45. Insertion of additional code in the existing program in order to count coverage items.
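A minimal hypothetical sketch of this idea in Python: counting probes are inserted into an invented function so that running it records which statements executed and how often (real coverage tools insert these probes automatically).

```python
# Hypothetical hand-instrumented function: each mark(...) call is the
# "additional code" inserted into the program to count coverage items.

coverage = {}

def mark(stmt_id):
    """Inserted probe: count how many times a statement executes."""
    coverage[stmt_id] = coverage.get(stmt_id, 0) + 1

def absolute(n):
    mark("s1")
    if n < 0:
        mark("s2")
        return -n
    mark("s3")
    return n

absolute(-4)
absolute(3)
# After both calls, every statement has executed at least once.
```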






46. Severity - Priority






47. Testing performed at the development organization's site but not by the development team. (I.e. testing is performed by potential customers, users, or an independent testing team.)






48. Frequency of tests failing per unit of measure (e.g. time, number of transactions, test cases executed).






49. A component of the incident report that determines the actual effect of the incident on the software and its users.






50. Ability of software to provide appropriate performance relative to amount of resources used.