Test your basic knowledge

Instructions:
  • Answer 50 questions in 15 minutes.
  • If you are not ready to take this test, you can study here.
  • Match each statement with the correct term.
  • Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.

This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the correct answer obvious. Even so, retaking the test reinforces your understanding each time.
1. A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test etc. A test type may take place on one or more test levels or test phases. [After TMap]






2. The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software life cycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase.






3. The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.






4. A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback. [Fewster and Graham]






5. A document reporting on any event that occurred, e.g. during the testing, which requires investigation. [After IEEE 829]






6. Behavior of a component or system in response to erroneous input, from either a human user or from another component or system, or to an internal failure.






7. The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.






8. A variable (whether stored within a component or outside) that is read by a component.






9. Separation of responsibilities, which encourages the accomplishment of objective testing. [After DO-178b]






10. Testing to determine the safety of a software product.






11. Testing the quality of the documentation, e.g. user guide or installation guide.






12. Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet).






13. Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment. [IEEE 1219]






14. A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
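
(Illustration, not an answer option: a minimal Python sketch of such a chained suite, assuming an invented account workflow. Each test returns the state it produced, and the next test takes that state as its precondition.)

    # Hypothetical chained test cases: the postcondition of one test
    # is used as the precondition of the next.
    def test_create_account():
        account = {"balance": 0}           # postcondition: account exists with balance 0
        assert account["balance"] == 0
        return account

    def test_deposit(account):             # precondition: the account from the previous test
        account["balance"] += 100
        assert account["balance"] == 100   # postcondition: balance is 100
        return account

    def test_withdraw(account):            # precondition: balance is 100
        account["balance"] -= 40
        assert account["balance"] == 60
        return account

    # Execute the suite in order, handing each resulting state to the next test.
    state = test_create_account()
    state = test_deposit(state)
    state = test_withdraw(state)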






15. A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM), that describes the key elements of an effective test process.






16. The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.
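
(Illustration, not an answer option: a minimal Python sketch with an invented function. One if/else yields two branches, and the two assertions together exercise both, giving 100% branch coverage and, for this code, 100% decision and statement coverage as well.)

    # Hypothetical example: a single decision produces two branches.
    def classify(x):
        if x >= 0:
            return "non-negative"   # true branch
        else:
            return "negative"       # false branch

    assert classify(5) == "non-negative"   # exercises the true branch
    assert classify(-3) == "negative"      # exercises the false branch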






17. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.






18. The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects. [IEEE 610]
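
(Illustration, not an answer option: a sketch of one common way the estimate is made, assuming the detection rate observed for seeded defects also holds for the real ones. All numbers below are invented.)

    # Invented figures; the ratio estimator is only one possible model.
    seeded_total = 20    # defects deliberately seeded into the system
    seeded_found = 15    # seeded defects detected during testing
    real_found = 60      # genuine (non-seeded) defects detected during testing

    detection_rate = seeded_found / seeded_total         # 0.75
    estimated_real_total = real_found / detection_rate   # 80.0
    estimated_remaining = estimated_real_total - real_found

    print(estimated_remaining)   # about 20 real defects estimated to remain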






19. A type of test tool that enables data to be selected from existing databases or created - generated - manipulated and edited for use in testing.






20. The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.
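
(Illustration, not an answer option: a small Python sketch with an invented function showing why path coverage is stronger than branch coverage. Two sequential decisions give four paths; two well-chosen tests already cover every branch, but covering every path needs all four input combinations.)

    # Hypothetical: two independent decisions -> 2 x 2 = 4 paths.
    def label(a, b):
        s = ""
        if a > 0:
            s += "A"
        if b > 0:
            s += "B"
        return s

    # (1, 1) and (-1, -1) alone reach 100% branch coverage,
    # but 100% path coverage needs all four combinations:
    assert label(1, 1) == "AB"
    assert label(1, -1) == "A"
    assert label(-1, 1) == "B"
    assert label(-1, -1) == ""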






21. The capability of the software product to be attractive to the user. [ISO 9126] See also usability. The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions. [ISO 9126]






22. A chronological record of relevant details about the execution of tests. [IEEE 829]






23. Code that cannot be reached and therefore is impossible to execute.
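
(Illustration, not an answer option: a trivial Python sketch. The final statement can never execute because every path through the function has already returned.)

    def absolute(x):
        if x >= 0:
            return x
        else:
            return -x
        print("never runs")   # unreachable code: both branches return before this line

    assert absolute(-2) == 2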






24. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. [Freedman and Weinberg, IEEE 1028] See also peer review. A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.






25. A transition between two states of a component or system.
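
(Illustration, not an answer option: a minimal Python sketch using an invented order workflow. Each (state, event) pair maps to the next state; applying an event performs one state transition.)

    # Hypothetical state machine: (current state, event) -> next state.
    TRANSITIONS = {
        ("created", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("shipped", "deliver"): "delivered",
        ("created", "cancel"): "cancelled",
    }

    def next_state(state, event):
        # Unknown events leave the state unchanged in this sketch.
        return TRANSITIONS.get((state, event), state)

    state = "created"
    for event in ["pay", "ship", "deliver"]:
        state = next_state(state, event)   # one transition per event
    assert state == "delivered"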






26. A pointer that references a location that is out of scope for that pointer or that does not exist. See also pointer. A data item that specifies the location of another data item; for example, a data item that specifies the address of the next employee record to be processed.






27. A program of activities designed to improve the performance and maturity of the organization's processes, and the result of such a program. [CMMI]






28. A test case without concrete (implementation level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available. See also low level test case. A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators.






29. An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. [After IEEE 610]






30. A black box test design technique in which test cases are designed to execute user scenarios.






31. A black box test design technique in which test cases are designed based on boundary values. See also boundary value. An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
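
(Illustration, not an answer option: a Python sketch using an invented rule that ages 18 to 65 inclusive are accepted. The test cases sit on each edge of the valid partition and at the smallest incremental distance beyond it.)

    # Invented requirement: ages 18..65 inclusive are valid.
    def is_accepted(age):
        return 18 <= age <= 65

    boundary_cases = {
        17: False,   # just below the lower boundary
        18: True,    # lower boundary
        65: True,    # upper boundary
        66: False,   # just above the upper boundary
    }

    for age, expected in boundary_cases.items():
        assert is_accepted(age) == expected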






32. A specific category of risk related to the type of testing that can mitigate (control) that category. For example, the risk of user interactions being misunderstood can be mitigated by usability testing.






33. A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.






34. The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126]






35. The process of assigning a number or category to an entity to describe an attribute of that entity. [ISO 14598]






36. A superior method or innovative practice that contributes to the improved performance of an organization under given context, usually recognized as 'best' by other peer organizations.






37. Analysis of source code carried out without execution of that software.






38. A series which appears to be random but is in fact generated according to some prearranged sequence.
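
(Illustration, not an answer option: a Python sketch of a linear congruential generator, with constants picked only for illustration. The output looks random, yet it is fully determined by the seed, so the same seed always reproduces the same series.)

    # Simple linear congruential generator (illustrative constants).
    def lcg(seed, count, a=1103515245, c=12345, m=2**31):
        values = []
        x = seed
        for _ in range(count):
            x = (a * x + c) % m   # each value is computed from the previous one
            values.append(x)
        return values

    assert lcg(42, 5) == lcg(42, 5)   # same seed, same "random-looking" series
    print(lcg(42, 5))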






39. The ability to identify related items in documentation and software, such as requirements with associated tests. See also horizontal traceability, vertical traceability. The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).






40. The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. [After IEEE 610] See also efficiency. The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions. [ISO 9126]






41. The ease with which the software product can be transferred from one hardware or software environment to another. [ISO 9126]






42. A specification of the activity which a component or system being tested may experience in production. A load profile consists of a designated number of virtual users who process a defined set of transactions in a specified time period and according to a predefined operational profile.






43. An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process. [IEEE 610]






44. Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.






45. The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. [ISO 9126] See also functionality testing. Testing based on an analysis of the specification of the functionality of a component or system.






46. An input for which the specification predicts a result.






47. Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.






48. A software product that supports one or more test activities, such as planning and control, specification, building initial files and data, test execution and test analysis. [TMap] See also CAST. Acronym for Computer Aided Software Testing.






49. Procedure to derive and/or select test cases for nonfunctional testing based on an analysis of the specification of a component or system without reference to its internal structure. See also black box test design technique. Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.






50. The process of demonstrating the ability to fulfill specified requirements. Note the term 'qualified' is used to designate the corresponding status. [ISO 9000]






