Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. A test case design technique for a software component to ensure that the outcome of a decision point or branch in code is tested.
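
Study note (an illustrative sketch, not part of the question bank): the hypothetical Python function free_shipping below contains one decision point, and the two assertions give one test case per outcome of that branch.

    def free_shipping(order_total):
        """True when the order qualifies for free shipping (hypothetical rule)."""
        if order_total >= 100:   # the decision point
            return True          # True outcome of the branch
        return False             # False outcome of the branch

    # Branch/decision testing: at least one test case per outcome.
    assert free_shipping(150) is True    # exercises the True branch
    assert free_shipping(40) is False    # exercises the False branch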






2. Informal testing technique in which test planning and execution run in parallel






3. Events that occur during the testing process that require investigation.






4. Occurrences that happen before and after an unexpected event






5. Based on the generic iterative-incremental model: teams divide project tasks into small increments, using only short-term planning to implement the various iterations.






6. Ease with which software can be modified to correct defects, meet new requirements, make future maintenance easier, or adapt to a changed environment.






7. Black-box test design technique - test cases are designed from a decision table.
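
Study note (an illustrative sketch, not part of the question bank): decision_table and discount_percent below are hypothetical names; each rule of the table becomes one test case.

    # Conditions: (is a member?, order total >= 100?) -> action: discount percent
    decision_table = {
        (True,  True):  20,
        (True,  False): 10,
        (False, True):   5,
        (False, False):  0,
    }

    def discount_percent(is_member, order_total):
        return decision_table[(is_member, order_total >= 100)]

    # Decision table testing: one test case per rule (column) of the table.
    for (is_member, large_order), expected in decision_table.items():
        order_total = 150 if large_order else 50
        assert discount_percent(is_member, order_total) == expected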






8. Tools used to provide support for and automation of managing various testing documents such as the test policy, test strategy, and test plan






9. The capability of a software product to provide agreed and correct output with the required degree of precision






10. Components at the lowest level are tested first, with higher-level components simulated by drivers. The tested components are then used to test higher-level components. This is repeated until all levels have been tested.
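
Study note (an illustrative sketch, not part of the question bank): parse_amount is a hypothetical low-level component, and driver stands in for the higher-level component that would normally call it.

    def parse_amount(text):
        """Low-level component under test: turn raw user input into a number."""
        return float(text.strip().replace(",", ""))

    def driver():
        """Driver simulating the not-yet-integrated higher-level caller."""
        assert parse_amount(" 1,250.50 ") == 1250.5
        assert parse_amount("0") == 0.0

    driver()   # bottom-up: test the lowest level first via the driver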






11. Behavior or response of a software application that you observe when you execute the action steps in the test case.






12. Ad hoc method of exposing bugs based on past knowledge and experience of experts (e.g. empty strings, illegal characters, empty files, etc.).






13. A type of review that involves visual examination of documents to detect defects such as violations of development standards and non-conformance to higher-level documentation.






14. Input or combination of inputs required to test software.






15. Measure & analyze results of testing; Monitor, document, and share results of testing; Report information on testing; Initiate actions to improve processes; Make decisions about testing






16. Metric used to calculate the number of combinations of all single condition outcomes within one statement that are executed by a test case.
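
Study note (an illustrative sketch, not part of the question bank): can_withdraw is a hypothetical function whose single statement contains two conditions, so covering all combinations of single-condition outcomes needs all four cases below.

    def can_withdraw(balance, amount):
        return balance >= amount and amount > 0   # condition A and condition B

    cases = [
        (100,  50, True),    # A=True,  B=True
        (100,  -5, False),   # A=True,  B=False
        (10,   50, False),   # A=False, B=True
        (-10,   0, False),   # A=False, B=False
    ]
    for balance, amount, expected in cases:
        assert can_withdraw(balance, amount) == expected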






17. An event or item that can be tested using one or more test cases






18. Used to test the functionality of software as mentioned in software requirement specifications.






19. Enables testers to prove that functionality between two or more communicating systems or components is in accordance with requirements.






20. Frequency of tests failing per unit of measure (e.g. time, number of transactions, test cases executed).






21. Test case design technique used to identify bugs occurring on or around boundaries of equivalence partitions.
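
Study note (an illustrative sketch, not part of the question bank): is_eligible enforces a hypothetical 18-65 range; the test values sit on and just outside each boundary of the partition.

    def is_eligible(age):
        """Accepts ages 18..65 inclusive (hypothetical requirement)."""
        return 18 <= age <= 65

    # Boundary value analysis: values on and around each boundary.
    for age, expected in [(17, False), (18, True), (19, True),
                          (64, True), (65, True), (66, False)]:
        assert is_eligible(age) == expected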






22. Not related to the actual functionality, e.g. reliability, efficiency, usability, maintainability, portability, etc.






23. A document that provides the structure for writing test cases.






24. A document that records the description of each event that occurs during the testing process and that requires further investigation






25. Specific groups that represent a set of valid or invalid partitions for input conditions.
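
Study note (an illustrative sketch, not part of the question bank): the same hypothetical 18-65 age rule split into one valid and two invalid partitions, with a single representative value tested from each.

    def is_eligible(age):
        """Accepts ages 18..65 inclusive (hypothetical requirement)."""
        return 18 <= age <= 65

    # Equivalence partitioning: one representative value per partition.
    partitions = {
        "invalid: below range": (10, False),
        "valid: within range":  (40, True),
        "invalid: above range": (80, False),
    }
    for name, (age, expected) in partitions.items():
        assert is_eligible(age) == expected, name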






26. Waterfall, iterative-incremental, "V"






27. Identify software products, components, risks, and objectives; Estimate effort; Consider approach; Ensure adherence to organization policies; Determine team structure; Set up test environment; Schedule testing tasks & activities






28. Testing performed at the development organization's site but not by the development team (i.e. testing is performed by potential customers, users, or an independent testing team).






29. Used to replace a component that calls another component.






30. Testing software components that are separately testable. Also called module, program, and unit testing.






31. Testing performed based on the contract between a customer and the development organization. Customer uses results of the test to determine acceptance of software.






32. Ability of software to provide appropriate performance relative to amount of resources used.






33. Response of the application to an input






34. Conditions required to begin testing activities.






35. A code metric that specifies the number of independent paths through a program. Enables identification of complex (and therefore high-risk) areas of code.
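
Study note (an illustrative sketch, not part of the question bank): classify is a hypothetical function with three decision points, so its cyclomatic complexity is 3 + 1 = 4 and at least four test cases are needed to cover every independent path.

    def classify(score):
        if score < 0:        # decision 1
            return "invalid"
        if score < 60:       # decision 2
            return "fail"
        if score < 90:       # decision 3
            return "pass"
        return "distinction"

    # One test case per independent path (4 in total).
    for score, expected in [(-1, "invalid"), (30, "fail"),
                            (75, "pass"), (95, "distinction")]:
        assert classify(score) == expected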






36. The process of finding, analyzing, and removing the causes of failure in a software product.






37. Operational testing performed at an _external_ site without involvement of the developing organization.






38. Planning & Control - Analysis and Design - Implementation and Execution - Evaluating Exit Criteria and Reporting - Closure






39. Sequence in which data items are accessed or modified by code.






40. Incremental rollout; Adapt processes, testware, etc. to fit with use of the tool; Adequate training; Define guidelines for use of the tool (from pilot project); Implement a continuous improvement mechanism; Monitor use of the tool; Implement ways to learn lessons






41. Schedule tests; Manage test activities; Provide interfaces to different tools; Provide traceability of tests; Log test results; Prepare progress reports






42. The capability of a software product to provide functions that address explicit and implicit requirements when the product is used under specified conditions.






43. Actual inputs required to execute a test case






44. Bug, fault, internal error, problem, etc. A flaw in software that causes it to fail to perform its required functions.






45. Integration Approach: A frame or backbone is created and components are progressively integrated into it.






46. A metric used to calculate the number of ALL condition or sub-expression outcomes in code that are executed by a test suite.






47. Uses risks to: identify test techniques; determine how much testing is required; prioritize tests, addressing high-priority risks first






48. Testing performed to determine whether the system meets acceptance criteria






49. Conditions ensuring testing process is complete and the object being tested is ready for next stage.






50. Measures amount of testing performed by a collection of test cases






