Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the correct answer obvious; even so, taking the test repeatedly reinforces your understanding.
1. The capability of the software product to enable the user to learn its application. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






2. The capability of the software product to be upgraded to accommodate increased loads. [After Gerrard]






3. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing. Testing based on an analysis of the specification of the functionality of a component or system.






4. Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.






5. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
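For instance, the boundary values of an integer range can be enumerated mechanically. A minimal sketch in Python (the function name and the 1..100 range are illustrative, not part of any standard):

```python
def boundary_values(low, high):
    """Boundary values for the valid partition [low, high]:
    each edge plus the smallest incremental distance on either side."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# For an input field that accepts 1..100:
print(boundary_values(1, 100))  # [0, 1, 2, 99, 100, 101]
```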






6. A test plan that typically addresses multiple test levels. See also test plan. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice. [After IEEE 829]






7. The process of testing to determine the functionality of a software product.






8. Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. [After Fewster and Graham]






9. A description of a component's function in terms of its output values for specified input values under specified conditions, and required non-functional behavior (e.g. resource utilization).






10. A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).






11. A reason or purpose for designing and executing a test.






12. A variable (whether stored within a component or outside) that is written by a component.






13. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script. [After IEEE 829]






14. A document that consists of a test design specification, test case specification and/or test procedure specification.






15. The number of defects found by a test phase, divided by the number found by that test phase and by any other means afterwards.
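Worked as a quick sketch (the defect counts are made up for illustration):

```python
def ddp(found_by_phase, found_afterwards):
    """Defect detection percentage: defects found by a test phase,
    divided by that number plus defects found by any means afterwards."""
    return 100.0 * found_by_phase / (found_by_phase + found_afterwards)

# System test finds 80 defects; 20 more surface after release:
print(ddp(80, 20))  # 80.0
```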






16. A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.






17. A scale that constrains the type of data analysis that can be performed on it. [ISO 14598]






18. A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.






19. A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. [Fewster and Graham]
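A minimal sketch of the idea in Python (the function under test and the table rows are hypothetical):

```python
# Hypothetical function under test.
def discount(total):
    return total * 0.9 if total >= 100 else total

# The "table": one row per test, holding the input and expected result.
test_table = [
    (50, 50),
    (100, 90.0),
    (200, 180.0),
]

# A single control script executes every test in the table.
def run_table(table):
    failures = []
    for given, expected in table:
        actual = discount(given)
        if actual != expected:
            failures.append((given, actual, expected))
    return failures

print(run_table(test_table))  # [] means every row passed
```

Adding a test is then a matter of adding a row to the table, not writing a new script.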






20. An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]






21. A software tool used to carry out instrumentation.






22. The percentage of definition-use pairs that have been exercised by a test suite.
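As a sketch, the definition-use pairs of a small (hypothetical) function can be listed by hand, and the coverage computed from how many of them a suite exercises:

```python
def clamp(x):
    y = x          # definition d1 of y
    if x < 0:
        y = 0      # definition d2 of y
    return y       # use of y: du-pairs (d1, return) and (d2, return)

# A suite containing only clamp(5) reaches (d1, return) but never
# (d2, return): 1 of 2 du-pairs exercised.
def du_pair_coverage(exercised, total):
    return 100.0 * exercised / total

print(du_pair_coverage(1, 2))  # 50.0
```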






23. An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. See also integration testing. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.






24. Operational testing in the acceptance test phase, typically performed in a simulated real-life operational environment by operator and/or administrator focusing on operational aspects, e.g. recoverability, resource-behavior, installability and technical compliance.






25. The process of testing to determine the maintainability of a software product.






26. A test tool to perform automated test comparison of actual results with expected results.
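A toy sketch of automated comparison (real comparators typically diff files or captured output; the line-based version here is only illustrative):

```python
def compare(actual_lines, expected_lines):
    """Report each position where actual and expected results differ."""
    mismatches = []
    for i, (a, e) in enumerate(zip(actual_lines, expected_lines)):
        if a != e:
            mismatches.append((i, a, e))
    if len(actual_lines) != len(expected_lines):
        mismatches.append(("length", len(actual_lines), len(expected_lines)))
    return mismatches

actual = ["OK", "total=41", "done"]
expected = ["OK", "total=42", "done"]
print(compare(actual, expected))  # [(1, 'total=41', 'total=42')]
```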






27. A black box test design technique in which test cases are designed to execute user scenarios.






28. A technique used to characterize the elements of risk. The result of a hazard analysis will drive the methods used for development and testing of a system. See also risk analysis. The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).






29. Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.






30. A specific category of risk related to the type of testing that can mitigate (control) that category. For example, the risk of user-interactions being misunderstood can be mitigated by usability testing.






31. Testing, either functional or non-functional, without reference to the internal structure of the component or system. Black-box test design technique: procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.






32. Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.






33. A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out. [After TMap]






34. Testing to determine the ease by which users with disabilities can use a component or system. [Gerrard]






35. An executable statement where a variable is assigned a value.






36. Acronym for Computer Aided Software Testing. See also test automation. The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.






37. A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
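A small sketch of that workflow (the leap-year example is hypothetical): the checks are written first, then just enough code to make them pass.

```python
# Step 1: capture the requirement as (automatable) test cases.
def test_is_leap():
    assert is_leap(2000) is True    # divisible by 400
    assert is_leap(1900) is False   # divisible by 100 but not 400
    assert is_leap(2024) is True    # divisible by 4
    assert is_leap(2023) is False

# Step 2: develop the software to make those test cases pass.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_is_leap()
print("all test cases pass")
```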






38. A tool that carries out static code analysis. The tool checks source code for certain properties such as conformance to coding standards, quality metrics or data flow anomalies.






39. The process of testing to determine the maintainability of a software product.






40. An abstract representation of all possible sequences of events (paths) in the execution through a component or system.






41. A tool that facilitates the recording and status tracking of defects and changes. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. See also incident management tool.






42. A model that shows the growth in reliability over time during continuous testing of a component or system as a result of the removal of defects that result in reliability failures.






43. A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST. Acronym for Computer Aided Software Testing.






44. The process of testing the installability of a software product. See also portability testing. The process of testing to determine the portability of a software product.






45. The insertion of additional code into the program in order to collect information about program behavior during execution, e.g. for measuring code coverage.
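A hand-made sketch of the idea: the dictionary updates below are the inserted code, present only to record which branches ran (a real coverage tool inserts equivalent probes automatically).

```python
hits = {"negative_branch": 0, "fallthrough": 0}  # collected behavior data

def absolute(x):
    if x < 0:
        hits["negative_branch"] += 1  # instrumentation probe
        return -x
    hits["fallthrough"] += 1          # instrumentation probe
    return x

absolute(-3)
absolute(5)
print(hits)  # {'negative_branch': 1, 'fallthrough': 1}
```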






46. The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied. [IEEE 610]






47. Acronym for Computer Aided Software Engineering.






48. A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.






49. The behavior produced/observed when a component or system is tested.






50. A factor that could result in future negative consequences; usually expressed as impact and likelihood.
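One common way to make this quantitative (a convention, not part of the definition above) is risk exposure: likelihood multiplied by impact.

```python
def risk_exposure(likelihood, impact):
    """Expected loss from a risk: the probability of the negative
    consequence times its cost if it occurs."""
    return likelihood * impact

# A 25% chance of a failure costing 40,000:
print(risk_exposure(0.25, 40_000))  # 10000.0
```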