Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so at times you might find the answers obvious, but you will see that it reinforces your understanding each time you take the test.
1. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with (IAW) requirements.






2. Behavior or response of a software application that you observe when you execute the action steps in the test case.






3. Testing software components that are separately testable. Also known as module, program, or unit testing.
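
For illustration, a minimal sketch of such a component (unit) test using Python's built-in unittest module; the add function and its cases are hypothetical stand-ins for a real unit under test:

    import unittest

    def add(a, b):
        """Component under test: a small, separately testable unit."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()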






4. Develop & prioritize test cases - Create groups of test cases - Set up test environment






5. Review documents (reqs, architecture, design, etc.) - ID conditions to be tested - Design tests - Assess testability of reqs - ID infrastructure & tools






6. Inputs - Expected Results - Actual Results - Anomalies - Date & Time - Procedure Step - Attempts to repeat - Testers - Observers






7. Severity - Priority






8. Execute individual & groups of test cases - Record results - Compare results with expected - Report differences between actual & expected - Re-execute to verify fixes






9. A technique used to improve testing coverage by deliberately introducing faults in code.
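
As a sketch of this fault-seeding idea in Python: is_adult is a hypothetical component, and the seeded variant mutates one operator; a test suite that catches the seeded fault gives some confidence in its coverage:

    # Hypothetical component under test.
    def is_adult(age):
        return age >= 18

    # Deliberately introduced fault: the boundary comparison is mutated.
    def is_adult_faulty(age):
        return age > 18

    def suite_passes(fn):
        """A small test suite; it should fail on the seeded fault."""
        return fn(17) is False and fn(18) is True and fn(30) is True

    print(suite_passes(is_adult))         # True: original passes
    print(suite_passes(is_adult_faulty))  # False: suite detects the seeded fault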






10. Separation of testing responsibilities, which encourages objective testing.






11. Uses risks to: ID test techniques - Determine how much testing is required - Prioritize tests, with high-priority risks first






12. A functional testing approach in which test cases are designed based on business processes.






13. Component - Integration - System - Acceptance






14. Informal testing technique in which test planning and execution run in parallel






15. Measures amount of testing performed by a collection of test cases
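
A toy Python sketch of the idea, using hand-rolled instrumentation (a real coverage tool would record this automatically); grade and its statement labels are hypothetical:

    # Hand-rolled statement-coverage bookkeeping for a hypothetical component.
    executed = set()

    def grade(score):
        executed.add("s1")
        if score >= 50:
            executed.add("s2")
            return "pass"
        executed.add("s3")
        return "fail"

    all_statements = {"s1", "s2", "s3"}

    grade(80)  # one test case: only the passing path runs
    print(f"coverage: {len(executed) / len(all_statements):.0%}")  # 67%

    grade(30)  # a second test case adds the failing path
    print(f"coverage: {len(executed) / len(all_statements):.0%}")  # 100%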






16. Extract data from existing databases to be used during execution of tests - Make data anonymous - Generate new records populated with random data - Sort records - Construct a large number of similar records from a template






17. Black-box techniques used to derive test cases, drawing on the knowledge, intuition, and skill of individuals.






18. Ease with which software can be modified to correct defects, meet new requirements, make future maintenance easier, or adapt to a changed environment.






19. Frequency of tests failing per unit of measure (e.g., time, number of transactions, test cases executed).






20. Specific groups that represent a set of valid or invalid partitions for input conditions.
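
For example, a Python sketch assuming a hypothetical rule that valid ages run from 18 to 65 inclusive; one representative value per partition stands in for the whole class:

    # Hypothetical input rule: valid ages are 18..65 inclusive.
    def accepts_age(age):
        return 18 <= age <= 65

    partitions = {
        "invalid_below": range(0, 18),
        "valid": range(18, 66),
        "invalid_above": range(66, 130),
    }

    # Test one representative per partition instead of every possible value.
    for name, values in partitions.items():
        representative = values[len(values) // 2]
        print(name, representative, accepts_age(representative))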






21. White-box technique used to design test cases for a software component using LCSAJ.






22. Bug, fault, internal error, problem, etc. A flaw in software that causes it to fail to perform its required functions.






23. Components or subsystems are integrated and tested one or some at a time until all the components and subsystems are integrated and tested.






24. Tests functional or nonfunctional attributes of a system or its components but without referring to the internal structure of the system or its components






25. Based on the generic iterative-incremental model. Teams divide project tasks into small increments, using only short-term planning to implement the various iterations.






26. Incremental rollout - Adapt processes, testware, etc. to fit with use of tool - Adequate training - Define guidelines for use of tool (from pilot project) - Implement continuous improvement mechanism - Monitor use of tool - Implement ways to learn lessons






27. Integration Approach: A frame or backbone is created and components are progressively integrated into it.






28. Waterfall - Iterative-incremental - "V"






29. Testing performed based on the contract between a customer and the development organization. Customer uses results of the test to determine acceptance of software.






30. A code metric that specifies the number of independent paths through a program. Enables identification of complex (and therefore high-risk) areas of code.
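
As a small, hypothetical Python example: the function below has three decision points, so its cyclomatic complexity is 3 + 1 = 4, meaning four independent paths need tests:

    def classify(temp, raining):
        # 3 decision points (if, elif, if) -> cyclomatic complexity = 3 + 1 = 4.
        if temp < 0:
            label = "freezing"
        elif temp < 20:
            label = "cool"
        else:
            label = "warm"
        if raining:
            label += ", wet"
        return label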






31. Check to make sure a system adheres to a defined set of standards, conventions, or regulations in laws and similar specifications.






32. Schedule tests - Manage test activities - Provide interfaces to different tools - Provide traceability of tests - Log test results - Prepare progress reports






33. Ability of software to collaborate with one or more specified systems, subsystems, or components.






34. Special-purpose software used to simulate a component called by the component under test
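
A minimal Python sketch, assuming a hypothetical total_price component that depends on a pricing service it calls; the stub stands in for that called component with canned answers:

    # Component under test: calls out to a pricing service.
    def total_price(items, pricing_service):
        return sum(pricing_service.price_of(item) for item in items)

    # Stub: simulates the called component so total_price can be
    # tested before the real service exists.
    class PricingServiceStub:
        def price_of(self, item):
            return {"apple": 1.0, "bread": 2.5}.get(item, 0.0)

    assert total_price(["apple", "bread"], PricingServiceStub()) == 3.5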






35. A set of conditions that a system needs to meet in order to be accepted by end users






36. Find defects in code while the software application being tested is running.






37. Testing performed to determine whether the system meets acceptance criteria






38. Ability of software to provide appropriate performance relative to amount of resources used.






39. The capability of a software product to provide functions that address explicit and implicit requirements for the product under specified conditions.






40. A type of review that involves visual examination of documents to detect defects such as violations of development standards and non-conformance to higher-level documentation.






41. Record details of test cases executed - Record order of execution - Record results






42. A black-box test design technique used to identify possible causes of a problem by using the cause-effect diagram






43. Linear Code Sequence and Jump.






44. A document that records the description of each event that occurs during the testing process and that requires further investigation






45. Special-purpose software used to simulate a component that calls the component under test
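
By contrast with a stub, a driver sits above the unit. A Python sketch, where apply_discount is a hypothetical component under test that is not normally invoked directly:

    # Component under test.
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    # Driver: stands in for the calling component, feeding inputs and
    # checking outputs until the real caller is available.
    def driver():
        cases = [((100.0, 10), 90.0), ((80.0, 25), 60.0), ((50.0, 0), 50.0)]
        for args, expected in cases:
            actual = apply_discount(*args)
            print(args, "->", actual, "OK" if actual == expected else "FAIL")

    if __name__ == "__main__":
        driver()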






46. Tracing requirements for a level of testing using test documentation from the test plan to the test script.






47. Input or combination of inputs required to test software.






48. Response of the application to an input






49. A component of the incident report that determines the actual effect of the incident on the software and its users.






50. Commercial Off-The-Shelf products. Products developed for the general market as opposed to those developed for a specific customer.