Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. The smallest software item that can be tested in isolation.






2. A test case design technique for a software component to ensure that the outcome of a decision point or branch in code is tested.
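For example, a minimal Python sketch (hypothetical function and values) in which two test cases exercise both outcomes of the decision point:

    def classify(age):
        # Single decision point: the branch outcome depends on this comparison.
        if age >= 18:
            return "adult"
        return "minor"

    # Two test cases are enough to exercise both branch outcomes.
    assert classify(30) == "adult"   # decision evaluates to True
    assert classify(10) == "minor"   # decision evaluates to False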






3. Specific groups that represent a set of valid or invalid partitions for input conditions.
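For example, a minimal Python sketch assuming a hypothetical rule that valid ages run from 18 to 65 inclusive:

    # Three partitions follow: one valid, two invalid.
    partitions = {
        "invalid_below": [0, 17],    # any value here should be rejected
        "valid":         [18, 65],   # any value here should be accepted
        "invalid_above": [66, 120],  # any value here should be rejected
    }

    def is_valid_age(age):
        return 18 <= age <= 65

    # One representative value per partition is assumed to behave like the rest.
    assert is_valid_age(30) is True
    assert is_valid_age(10) is False
    assert is_valid_age(70) is False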






4. Measures amount of testing performed by a collection of test cases






5. A technique used to improve testing coverage by deliberately introducing faults in code.
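One common flavor of this, error seeding, is sketched below in Python with hypothetical functions: a fault is introduced on purpose and the test suite is expected to detect it.

    # Original function and a deliberately faulty ("seeded") variant.
    def add(a, b):
        return a + b

    def add_seeded(a, b):
        return a - b  # fault introduced on purpose

    def test_suite(fn):
        # Returns True if every check passes.
        return fn(2, 3) == 5 and fn(0, 0) == 0

    # A good test suite passes on the original and fails on the seeded fault.
    assert test_suite(add) is True
    assert test_suite(add_seeded) is False  # the seeded fault was detected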






6. Increased load (transactions) used to test behavior of the system under high volume.
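A toy Python load-generation sketch; handle_transaction is a hypothetical stand-in for real transactions against the system under test:

    import concurrent.futures, time

    def handle_transaction(i):
        # Hypothetical unit of work standing in for a real transaction.
        time.sleep(0.001)
        return i

    # Drive the system with many concurrent transactions and time the run.
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        results = list(pool.map(handle_transaction, range(1000)))
    elapsed = time.perf_counter() - start
    print(f"processed {len(results)} transactions in {elapsed:.2f}s")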






7. Sequence in which instructions are executed through a component or system






8. Requirements that determine the functionality of a software system.






9. Commercial Off-The-Shelf products. Products developed for the general market as opposed to those developed for a specific customer.






10. Test case design technique used to identify bugs occurring on or around boundaries of equivalence partitions.
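For example, a minimal Python sketch assuming a hypothetical valid range of 18 to 65; the test values sit on and around each boundary of the partition:

    def is_valid_age(age):
        # Hypothetical rule: valid ages are 18..65 inclusive.
        return 18 <= age <= 65

    # Boundary value analysis: test on and around each boundary.
    cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
    for value, expected in cases.items():
        assert is_valid_age(value) is expected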






11. Special additions or changes to the environment required to run a test case.






12. Not related to the actual functionality, e.g. reliability, efficiency, usability, maintainability, portability, etc.






13. White-box design technique used to design test cases for a software component using LCSAJ.






14. Bug, fault, internal error, problem, etc. A flaw in software that causes it to fail to perform its required functions.






15. Incremental rollout - Adapt processes, testware, etc. to fit with use of the tool - Adequate training - Define guidelines for use of the tool (from pilot project) - Implement continuous improvement mechanism - Monitor use of the tool - Implement ways to learn lessons






16. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with requirements.






17. A review not based on a formal documented procedure






18. Incident Report - Identifier - Summary - Incident Description - Impact






19. Requirements Analysis - Design - Coding - Integration - Implementation - Maintenance






20. Deviation of a software system from its expected delivery, service, or result.






21. Components or subsystems are integrated and tested one or some at a time until all the components or subsystems are integrated and tested.






22. Testing performed to determine whether the system meets acceptance criteria






23. Human action that generates an incorrect result.






24. Events that occurred during the testing process that require investigation.






25. Linear Code Sequence and Jump.






26. A metric to calculate the number of SINGLE condition outcomes that can independently affect the decision outcome.
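For example, a minimal Python sketch for the decision "a and b", using the three test cases in which each single condition independently flips the decision outcome:

    def decision(a, b):
        return a and b

    # For "a and b", three test cases show independent effect:
    #   (True, True) vs (False, True) -> only a changed, outcome changed
    #   (True, True) vs (True, False) -> only b changed, outcome changed
    cases = [(True, True, True), (False, True, False), (True, False, False)]
    for a, b, expected in cases:
        assert decision(a, b) is expected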






27. A metric used to calculate the number of ALL condition or sub-expression outcomes in code that are executed by a test suite.






28. Ad hoc method of exposing bugs based on past knowledge and experience of experts (e.g. empty strings, illegal characters, empty files, etc.).






29. All possible combinations of input values and preconditions are tested.
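For example, a minimal Python sketch in which a two-input boolean function is small enough to test exhaustively:

    from itertools import product

    def xor(a, b):
        return a != b

    # With only two boolean inputs, exhaustive testing is feasible:
    # every combination of input values is executed.
    expected = {(False, False): False, (False, True): True,
                (True, False): True, (True, True): False}
    for a, b in product([False, True], repeat=2):
        assert xor(a, b) == expected[(a, b)]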






30. Testing performed to detect defects in interfaces and interaction between integrated components. Also called "integration testing in the small".






31. Combining components or systems into larger structural units or subsystems.






32. Ease with which software can be modified to correct defects, meet new requirements, make future maintenance easier, or adapt to a changed environment.






33. Black-box testing technique used to create groups of input conditions that create the same kind of output.






34. Record details of test cases executed - Record order of execution - Record results






35. Waterfall - iterative-incremental - "V"






36. ID SW products - components - risks - objectives; Estimate effort; Consider approach; Ensure adherence to organization policies; Determine team structure; Set up test environment; Schedule testing tasks & activities






37. Response of the application to an input






38. A unique identifier for each incident report generated during test execution.






39. Scripting technique that uses data files to store test input, expected results, and keywords related to the software application being tested.
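For example, a minimal Python sketch in which the test data live in a hypothetical CSV file (stubbed here as a string) and the script simply replays it:

    import csv, io

    # Hypothetical data file: each row holds a test input and its expected result.
    data_file = io.StringIO(
        "input,expected\n"
        "2,4\n"
        "3,9\n"
        "-5,25\n"
    )

    def square(x):
        return x * x

    # The script stays the same; new test cases are added by editing the data only.
    for row in csv.DictReader(data_file):
        assert square(int(row["input"])) == int(row["expected"])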






40. Execute individual & groups of test cases - Record results - Compare results with expected - Report differences between actual & expected - Re-execute to verify fixes






41. A black-box test design technique used to identify possible causes of a problem by using the cause-effect diagram






42. Informal testing technique in which test planning and execution run in parallel






43. Testing software in its operational environment






44. Used to replace a component that calls another component.






45. Separation of testing responsibilities, which encourages the accomplishment of objective testing






46. Fixed - Won't Fix - Later - Remind - Duplicate - Incomplete - Not a Bug - Invalid etc.






47. Black-box test design technique - test cases are designed from a decision table.
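For example, a minimal Python sketch assuming a hypothetical discount rule, with one test case per rule (column) of the decision table:

    # Hypothetical rule captured as a decision table:
    # conditions (member?, order > 100?) -> action (discount %).
    decision_table = {
        (True,  True):  15,
        (True,  False): 10,
        (False, True):  5,
        (False, False): 0,
    }

    def discount(is_member, big_order):
        if is_member:
            return 15 if big_order else 10
        return 5 if big_order else 0

    # One test case per rule of the decision table.
    for (is_member, big_order), expected in decision_table.items():
        assert discount(is_member, big_order) == expected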






48. Tests interfaces between components and between integrated components and systems.






49. Ability of software to provide appropriate performance relative to amount of resources used.






50. Tools used to keep track of different versions, variants, and releases of software and test artifacts (such as design documents, test plans, and test cases).