Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so some answers may seem obvious at times, but retaking the test reinforces your understanding.
1. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).






2. A unique identifier for each incident report generated during test execution.






3. Ease with which software can be modified to correct defects, meet new requirements, make future maintenance easier, or adapt to a changed environment.






4. An analysis that determines the portion of code in the software executed by a set of test cases.






5. Schedule tests; manage test activities; provide interfaces to different tools; provide traceability of tests; log test results; prepare progress reports.






6. Tools used to store and manage incidents (defects, failures, or anomalies).






7. Process used to create a software product from initial conception to public release.






8. Execute individual and groups of test cases; record results; compare results with expected results; report differences between actual and expected results; re-execute tests to verify fixes.






9. Conditions ensuring the testing process is complete and the object being tested is ready for the next stage.






10. Tracing requirements for a level of testing using test documentation from the test plan to the test script.






11. Uses risks to: identify test techniques; determine how much testing is required; prioritize tests so that high-priority risks are tested first.






12. Incremental rollout; adapt processes, testware, etc. to fit with use of the tool; adequate training; define guidelines for use of the tool (from the pilot project); implement a continuous improvement mechanism; monitor use of the tool; implement ways to learn lessons.






13. The capability of a software product to provide agreed and correct output with the required degree of precision






14. Input or combination of inputs required to test software.






15. Separation of testing responsibilities, which encourages objective testing.






16. A test case design technique for a software component that ensures the outcome of a decision point or branch in the code is tested.






17. Based on the generic iterative-incremental model: teams work by dividing project tasks into small increments, involving only short-term planning, to implement the various iterations.






18. Deviation of a software system from its expected delivery, service, or result.






19. White-box design technique used to design test cases for a software component using LCSAJ.






20. Testing software in its operational environment






21. Testing performed at the development organization's site but by people outside the organization (i.e., testing is performed by potential customers, users, or an independent testing team).






22. Linear Code Sequence and Jump.






23. Increased load (transactions) used to test the behavior of a system under high volume.






24. Software products or applications designed to automate manual testing tasks.






25. Special-purpose software used to simulate a component that calls the component under test






26. Requirements that determine the functionality of a software system.






27. Insertion of additional code in the existing program in order to count coverage items.






28. Bug, fault, internal error, problem, etc. A flaw in software that causes it to fail to perform its required functions.






29. Human action that generates an incorrect result.






30. Severity - Priority






31. The capability of a software product to provide functions that address the product's explicit and implicit requirements under specified conditions.






32. Record details of the test cases executed; record the order of execution; record results.






33. Testing an integrated system to validate it meets requirements






34. The process of finding, analyzing, and removing causes of failure in a software product.






35. Integration approach in which components or subsystems are combined all at once rather than in stages.






36. Review documents (requirements, architecture, design, etc.); identify conditions to be tested; design tests; assess the testability of requirements; identify infrastructure and tools.






37. Tools used by developers to identify defects in programs.






38. One defect prevents the detection of another.






39. Measure and analyze the results of testing; monitor, document, and share the results of testing; report information on testing; initiate actions to improve processes; make decisions about testing.






40. Special additions or changes to the environment required to run a test case.






41. Waterfall - Iterative-incremental - "V"






42. Component - Integration - System - Acceptance






43. Informal testing technique in which test planning and execution run in parallel






44. Testing performed based on the contract between a customer and the development organization. Customer uses results of the test to determine acceptance of software.






45. Sequence in which instructions are executed through a component or system






46. Allows storage of test input and expected results in one or more central data sources or databases.






47. Used to replace a component that calls another component.






48. An event or item that can be tested using one or more test cases






49. Based on analysis of functional specifications of a system.






50. Integration Approach: A frame or backbone is created and components are progressively integrated into it.