Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding each time you take the test.
1. Black-box testing technique used to group input conditions that produce the same kind of output.
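As a rough illustration of this technique (commonly called equivalence partitioning), here is a minimal Python sketch; the validate_age function and the 18-65 age range are hypothetical, chosen only to show picking one representative value per partition.

    # Hypothetical function under test: accepts ages 18-65 inclusive.
    def validate_age(age: int) -> bool:
        return 18 <= age <= 65

    # Group the input domain into partitions expected to behave the same,
    # then test one representative value from each partition.
    partitions = {
        "below range (invalid)": 10,   # stands in for every value < 18
        "within range (valid)": 40,    # stands in for every value 18-65
        "above range (invalid)": 80,   # stands in for every value > 65
    }

    for name, representative in partitions.items():
        print(name, representative, validate_age(representative))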






2. Conditions ensuring the testing process is complete and the object being tested is ready for the next stage.






3. Tests interfaces between components and between integrated components and systems.






4. Integrate different kinds of tools to make test management simpler and more efficient.






5. Deviation of a software system from its expected delivery, service, or result.






6. Metric used to calculate the number of combinations of all single condition outcomes within one statement that are executed by a test case.
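This reads like a description of multiple condition (condition combination) coverage. A minimal sketch under that assumption, using a hypothetical two-condition decision:

    from itertools import product

    # Hypothetical decision made up of two atomic conditions, a and b.
    def decision(a: bool, b: bool) -> bool:
        return a and b

    # The metric counts how many of the 2^n combinations of single-condition
    # outcomes are exercised. With two conditions there are 4 combinations.
    all_combinations = set(product([True, False], repeat=2))
    executed = {(True, True), (True, False)}   # combinations hit by the test cases

    coverage = len(executed & all_combinations) / len(all_combinations)
    print(f"condition combination coverage: {coverage:.0%}")   # 50%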






7. The task of maintaining and controlling changes to all entities of a system.






8. Fixed - Won't Fix - Later - Remind - Duplicate - Incomplete - Not a Bug - Invalid etc.






9. Testing performed to detect defects in interfaces and interactions between integrated components. Also called "integration testing in the small".






10. Uses risks to: identify test techniques - determine how much testing is required - prioritize tests, with high-priority risks tested first






11. A functional testing approach in which test cases are designed based on business processes.






12. Check to make sure a system adheres to a defined set of standards, conventions, or regulations in laws and similar specifications.






13. Begins with an initial requirements specification phase and ends with the implementation and maintenance phases, with cyclical transitions in between.






14. Human action that generates an incorrect result.






15. Linear Code Sequence and Jump.






16. Planning & Control - Analysis and Design - Implementation and Execution - Evaluating Exit Criteria and Reporting - Closure






17. Behavior or response of a software application that you observe when you execute the action steps in the test case.






18. Schedule tests - Manage test activities - Provide interfaces to different tools - Provide traceability of tests - Log test results - Prepare progress reports






19. Testing performed to determine whether the system meets acceptance criteria






20. Testing an integrated system to validate it meets requirements






21. Execute individual test cases and groups of test cases - Record results - Compare results with expected results - Report differences between actual and expected results - Re-execute to verify fixes






22. Based on analysis of functional specifications of a system.






23. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with requirements.






24. Bug, fault, internal error, problem, etc. A flaw in software that causes it to fail to perform its required functions.






25. Measure & analyze results of testing; Monitor, document, and share results of testing; Report information on testing; Initiate actions to improve processes; Make decisions about testing






26. Tracing requirements for a level of testing using test documentation from the test plan to the test script.






27. A component of the incident report that determines the actual effect of the incident on the software and its users.






28. Measures the amount of testing performed by a collection of test cases






29. Combining components or systems into larger structural units or subsystems.






30. White-box design technique used to design test cases for a software component using LCSAJ.
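As a rough illustration (the classify function below is hypothetical): an LCSAJ is usually described as a triple consisting of the start of a linear sequence of executable statements, the end of that sequence, and the target of the jump that terminates it.

    # Hypothetical function; statement positions are marked in the comments.
    def classify(x: int) -> str:       # (1) entry
        result = "unknown"             # (2)
        if x < 0:                      # (3) if false, control jumps to (5)
            result = "negative"        # (4)
        return result                  # (5) exit

    # Example LCSAJ triples (start, end, jump target) for this function:
    #   (1, 3, 5)    - statements 1-3 run in sequence, then the jump to 5 when x >= 0
    #   (1, 5, exit) - statements 1-5 run straight through when x < 0
    #   (5, 5, exit) - the sequence entered at 5 after the jump, ending at exit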






31. A document that provides the structure for writing test cases.






32. The capability of a software product to provide agreed and correct output with the required degree of precision






33. Tool or hardware device that runs in parallel with the assembled component. It monitors, records, and analyzes the behavior of the tested system.






34. Ability of software to provide appropriate performance relative to the amount of resources used.






35. Input or combination of inputs required to test software.






36. Simple & easy to follow - Its rigidity makes it easy to manage - It's typically well planned and systematic - Freezing requirements before development begins ensures no rework later - Each phase has specific deliverables






37. Ease with which software can be modified to correct defects, meet new requirements, make future maintenance easier, or adapt to a changed environment.






38. Incremental rollout - Adapt processes, testware, etc. to fit with use of the tool - Adequate training - Define guidelines for use of the tool (from the pilot project) - Implement a continuous improvement mechanism - Monitor use of the tool - Implement ways to learn lessons






39. Testing performed based on the contract between a customer and the development organization. The customer uses the results of the test to determine acceptance of the software.






40. Sequence in which instructions are executed through a component or system






41. A code metric that specifies the number of independent paths through a program. Enables identification of complex (and therefore high-risk) areas of code.
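As a rough, worked illustration (not from the source), cyclomatic complexity for a single function can be estimated as the number of decision points plus one; the triage function below is hypothetical.

    # Hypothetical function; each `if` and the `while` is a decision point.
    def triage(severity: int, is_blocking: bool) -> str:
        if severity >= 8:        # decision 1
            return "critical"
        if is_blocking:          # decision 2
            return "high"
        while severity > 0:      # decision 3
            severity -= 1
        return "low"

    # A common shortcut: cyclomatic complexity = number of decision points + 1,
    # so triage has complexity 3 + 1 = 4, i.e. 4 independent paths to cover.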






42. Integration Approach: A frame or backbone is created and components are progressively integrated into it.






43. Conditions required to begin testing activities.






44. Nonfunctional testing including testing: ease of fixing defects - ease of meeting new requirements - ease of maintenance






45. Ability of software to collaborate with one or more specified systems, subsystems, or components.






46. Used to test the functionality of software as mentioned in the software requirements specification.






47. A metric to calculate the number of SINGLE condition outcomes that can independently affect the decision outcome.
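This appears to describe modified condition/decision (condition determination) coverage. A minimal sketch under that assumption, using a hypothetical two-condition decision:

    # Hypothetical decision made up of two atomic conditions, a and b.
    def decision(a: bool, b: bool) -> bool:
        return a or b

    # To show that `a` independently affects the outcome, hold `b` fixed and
    # flip only `a`: the decision result must change.
    pair_for_a = [(True, False), (False, False)]   # decision: True -> False
    # Likewise for `b`: hold `a` fixed and flip only `b`.
    pair_for_b = [(False, True), (False, False)]   # decision: True -> False

    for a, b in sorted(set(pair_for_a + pair_for_b)):
        print(a, b, decision(a, b))
    # Three test cases are enough here, rather than all four combinations.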






48. Not related to the actual functionality, e.g. reliability, efficiency, usability, maintainability, portability, etc.






49. A unique identifier for each incident report generated during test execution.






50. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).