Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. A test case design technique for a software component to ensure that the outcome of a decision point or branch in code is tested.
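For illustration, a minimal Python sketch of this idea; the apply_discount function and its tests are hypothetical, and the point is simply that both outcomes of the decision point are exercised:

    def apply_discount(total):
        # Decision point: orders of 100 or more get 10 off, others do not.
        if total >= 100:
            return total - 10
        return total

    # One test per decision outcome: branch taken and branch not taken.
    def test_discount_applied():
        assert apply_discount(150) == 140

    def test_discount_not_applied():
        assert apply_discount(50) == 50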






2. Linear Code Sequence and Jump.






3. A code metric that specifies the number of independent paths through a program. Enables identification of complex (and therefore high-risk) areas of code.
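A rough worked example, assuming the common rule of thumb that the metric for a single-entry, single-exit routine equals the number of decision points plus one; the classify function is hypothetical:

    def classify(temp, humidity):
        # Two decision points, so the metric is 2 + 1 = 3: three independent
        # paths through the function that tests would need to cover.
        if temp > 30:
            return "hot"
        if humidity > 80:
            return "humid"
        return "mild"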






4. An analysis that determines the portion of the software's code executed by a set of test cases.
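A sketch of what such an analysis reports, assuming a hypothetical grade function and a single test; a tool such as coverage.py would flag the lines the test never reaches:

    def grade(score):
        if score >= 90:
            return "A"      # executed by the test below
        if score >= 75:
            return "B"      # never executed -> reported as uncovered
        return "F"          # never executed -> reported as uncovered

    def test_grade_a():
        assert grade(95) == "A"

    # With only this test, 2 of the 5 executable statements in grade() run,
    # so statement coverage of the function is roughly 40%.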






5. Used to replace a component that calls another component.
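A minimal sketch of such a replacement caller; calculate_tax and the checks are hypothetical:

    # Unit under test: normally invoked by a higher-level billing component
    # that is not yet available.
    def calculate_tax(amount, rate=0.2):
        return round(amount * rate, 2)

    # A throwaway caller that stands in for the missing higher-level component,
    # feeds inputs to the unit under test, and checks the results.
    def driver():
        assert calculate_tax(100) == 20.0
        assert calculate_tax(0) == 0.0
        print("driver: all checks passed")

    if __name__ == "__main__":
        driver()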






6. Uses risks to: ID test techniques - Determine how much testing is required - Prioritize tests, with high-priority risks first






7. Measure & analyze results of testing; Monitor, document and share results of testing; Report information on testing; Initiate actions to improve processes; Make decisions about testing






8. Components are integrated in the order in which they are developed






9. A functional testing approach in which test cases are designed based on business processes.






10. Examines whether changes made to an operational system cause defects.






11. Review documents (reqs, architecture, design, etc.); ID conditions to be tested; Design tests; Assess testability of reqs; ID infrastructure & tools






12. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).






13. Incremental rollout; Adapt processes, testware, etc. to fit with use of tool; Adequate training; Define guidelines for use of tool (from pilot project); Implement continuous improvement mechanism; Monitor use of tool; Implement ways to learn lessons






14. Testing performed to detect defects in interfaces and interaction between integrated components. Also called "integration testing in the small".






15. Fixed - Won't Fix - Later - Remind - Duplicate - Incomplete - Not a Bug - Invalid etc.






16. Based on analysis of functional specifications of a system.






17. Integration approach in which components or subsystems are combined all at once rather than in stages.






18. Special-purpose software used to simulate a component called by the component under test
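A small sketch of the idea; the checkout function and PaymentGatewayStub are hypothetical, with the simulated component standing in for a called dependency that is not yet available:

    # Component under test: depends on a payment gateway it calls.
    def checkout(cart_total, gateway):
        response = gateway.charge(cart_total)
        return "confirmed" if response["status"] == "ok" else "failed"

    # Simulates the called component with canned responses.
    class PaymentGatewayStub:
        def charge(self, amount):
            return {"status": "ok", "amount": amount}

    def test_checkout_with_stub():
        assert checkout(49.99, PaymentGatewayStub()) == "confirmed"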






19. Used to test the functionality of software as mentioned in software requirement specifications.






20. Extract data from existing databases to be used during execution of tests; make data anonymous; generate new records populated with random data; sort records; construct a large number of similar records from a template






21. Testing performed at the development organization's site but outside the organization itself (i.e., testing is performed by potential customers, users, or an independent testing team)






22. Ability of software to provide appropriate performance relative to the amount of resources used.






23. Nonfunctional testing including testing: ease of fixing defects - ease of meeting new requirements - ease of maintenance






24. Combining components or systems into larger structural units or subsystems.






25. Tools used to store and manage incident reports, defects, failures, or anomalies.






26. Incident Report: Identifier - Summary - Incident description - Impact






27. Requirements that determine the functionality of a software system.






28. Schedule tests; Manage test activities; Provide interfaces to different tools; Provide traceability of tests; Log test results; Prepare progress reports






29. Separation of testing responsibilities which encourages the accomplishment of objective testing






30. Planning & Control - Analysis and Design - Implementation and Execution - Evaluating Exit Criteria and Reporting - Closure






31. A review not based on a formal documented procedure






32. The capability of a software product to provide functions that address the explicit and implicit requirements for the product when used under specified conditions.






33. Testing software in its operational environment






34. A metric to calculate the number of SINGLE condition outcomes that can independently affect the decision outcome.
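A hedged sketch with a hypothetical two-condition decision: three tests are enough to show each single condition independently flipping the decision outcome while the other condition is held fixed.

    def ready_to_ship(paid, in_stock):
        # Decision made of two single conditions: paid AND in_stock.
        return paid and in_stock

    def test_baseline_true():
        assert ready_to_ship(True, True) is True

    def test_paid_alone_flips_outcome():
        assert ready_to_ship(False, True) is False

    def test_in_stock_alone_flips_outcome():
        assert ready_to_ship(True, False) is False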






35. Allows storage of test input and expected results in one or more central data sources or databases.
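One way to picture this, as a hypothetical sketch: test inputs and expected results live in a central data source (a CSV kept inline here; in practice a file or database), and a single test iterates over it.

    import csv, io

    TEST_DATA = io.StringIO("amount,rate,expected\n100,0.2,20.0\n50,0.1,5.0\n")

    def tax(amount, rate):
        return round(amount * rate, 2)

    def test_data_driven():
        # One assertion per row in the central data source.
        for row in csv.DictReader(TEST_DATA):
            assert tax(float(row["amount"]), float(row["rate"])) == float(row["expected"])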






36. A task of maintaining and controlling changes to all entities of a system.






37. Tests functional or nonfunctional attributes of a system or its components but without referring to the internal structure of the system or its components






38. Based on the generic iterative-incremental model. Teams divide project tasks into small increments, involving only short-term planning, and implement them over successive iterations.






39. Assessment of changes required to different layers of documentation and software to implement a given change to the original requirements.






40. Metric used to calculate the number of combinations of all single condition outcomes within one statement that are executed by a test case.
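A small illustration with a hypothetical two-condition decision: all 2**2 combinations of the single condition outcomes are exercised.

    from itertools import product

    def access_allowed(is_admin, is_owner):
        return is_admin or is_owner

    def test_every_condition_combination():
        # Expected decision outcome for each of the four combinations.
        expected = {(True, True): True, (True, False): True,
                    (False, True): True, (False, False): False}
        for combo in product([True, False], repeat=2):
            assert access_allowed(*combo) == expected[combo]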






41. Events that occurred during the testing process and require investigation.






42. Informal testing technique in which test planning and execution run in parallel






43. A technique used to improve testing coverage by deliberately introducing faults in code.
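A toy sketch of the idea, with a hypothetical max_of routine: a fault is deliberately seeded into a copy of the code, and a useful test suite is expected to catch it.

    def max_of(a, b):
        return a if a > b else b

    def max_of_seeded(a, b):
        return a if a < b else b   # deliberately seeded fault: comparison inverted

    def suite_passes(fn):
        # A tiny stand-in for a real test suite.
        return fn(2, 1) == 2 and fn(-1, -3) == -1

    assert suite_passes(max_of)              # suite passes on the correct code
    assert not suite_passes(max_of_seeded)   # suite detects the seeded fault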






44. A document that records the description of each event that occurs during the testing process and that requires further investigation






45. Black-box testing technique used to create groups of input conditions that produce the same kind of output.
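A brief sketch, assuming a hypothetical shipping_fee function whose inputs fall into three groups; one representative value is tested per group.

    import pytest

    def shipping_fee(weight_kg):
        # Input groups: invalid (<= 0), standard (0 < w <= 20), heavy (> 20).
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.0 if weight_kg <= 20 else 15.0

    def test_invalid_group():
        with pytest.raises(ValueError):
            shipping_fee(-3)

    def test_standard_group():
        assert shipping_fee(7) == 5.0

    def test_heavy_group():
        assert shipping_fee(42) == 15.0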






46. Component - Integration - System - Acceptance






47. A document that provides the structure for writing test cases.






48. Deviation of a software system from its expected delivery, service, or result






49. ID SW products - components - risks - objectives; Estimate effort; Consider approach; Ensure adherence to organization policies; Determine team structure; Set up test environment; Schedule testing tasks & activities






50. All possible combinations of input values and preconditions are tested.
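A quick back-of-the-envelope sketch of why this is rarely feasible, using a hypothetical form with five independent fields of ten values each:

    from itertools import product

    fields = [range(10)] * 5
    combinations = sum(1 for _ in product(*fields))
    print(combinations)   # 100000 input combinations for just five small fields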