Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions: Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you might sometimes find the answers obvious, but you will see that it reinforces your understanding as you retake the test.
1. Var[X] :
Marginal probability
nominal, ordinal, interval, and ratio
variance of X
Divide the sum by the number of values.
2. Many statistical methods seek to minimize the mean-squared error, and these are called
Independent Selection
A data point
methods of least squares
That is the median value
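Study note (not part of the quiz, question 2 above): a minimal Python sketch of the least-squares idea, fitting a straight line to a few made-up points by minimizing the sum of squared errors. All numbers are invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0]    # made-up predictor values
ys = [2.1, 3.9, 6.2, 7.8]    # made-up responses
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Closed-form least-squares slope and intercept for y = a + b*x
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
print(a, b)    # this (a, b) pair minimizes the mean-squared error over the data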
3. Have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data
Descriptive
Credence
That value is the median value
Ratio measurements
4. (also called statistical variability) is a measure of how diverse some data is. It can be expressed by the variance or the standard deviation.
Beta value
Statistical dispersion
The variance of a random variable
Probability density functions
5. Some commonly used symbols for population parameters
Null hypothesis
the population mean
Posterior probability
Placebo effect
6. Where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a 'false negative'.
Simpson's Paradox
Type II errors
A sample
the sample mean, the sample variance s2, the sample correlation coefficient r, the sample cumulants kr.
7. (cdfs) are denoted by upper case letters, e.g. F(x).
Sampling Distribution
Cumulative distribution functions
A likelihood function
Particular realizations of a random variable
8. Is the length of the smallest interval which contains all the data.
The Range
Lurking variable
f(z), and its cdf by F(z).
Probability density
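Study note (not part of the quiz, question 8 above): the range computed in a short Python sketch with made-up data; it is the length of the smallest interval containing all the values.
data = [4, 8, 15, 16, 23, 42]          # made-up values
data_range = max(data) - min(data)     # 42 - 4 = 38
print(data_range)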
9. Samples are drawn from two different populations such that each data value drawn in the first sample is matched with a corresponding data value in the second sample.
Marginal distribution
Cumulative distribution functions
Marginal probability
Dependent Selection
10. Is the probability distribution, under repeated sampling of the population, of a given statistic.
A Random vector
the sample mean, the sample variance s2, the sample correlation coefficient r, the sample cumulants kr.
A sampling distribution
A data set
11. Can be a population parameter, a distribution parameter, an unobserved parameter (with different shades of meaning). In statistics, this is often a quantity to be estimated.
12. Is the study of the collection, organization, analysis, and interpretation of data. It deals with all aspects of this, including the planning of data collection in terms of the design of surveys and experiments.
Sampling
A data set
Statistics
Likert scale
13. Is that part of a population which is actually observed.
Alpha value (Level of Significance)
A sample
Variability
Statistical dispersion
14. Is the probability of some event A, assuming event B. Conditional probability is written P(A|B), and is read 'the probability of A, given B'.
The arithmetic mean of a set of numbers x1, x2, ..., xn
Conditional probability
Descriptive
σ-algebras
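Study note (not part of the quiz, question 14 above): a small Python sketch of conditional probability, P(A|B) = P(A and B) / P(B), using one fair six-sided die as a made-up example.
outcomes = {1, 2, 3, 4, 5, 6}              # sample space of one fair die
A = {2, 4, 6}                              # event A: the roll is even
B = {4, 5, 6}                              # event B: the roll is greater than 3
p_B = len(B) / len(outcomes)               # P(B) = 3/6
p_A_and_B = len(A & B) / len(outcomes)     # P(A and B) = 2/6
p_A_given_B = p_A_and_B / p_B              # P(A|B) = 2/3
print(p_A_given_B)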
15. Are simply two different terms for the same thing: add the given values and divide by the number of values.
Trend
Coefficient of determination
Average and arithmetic mean
quantitative variables
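Study note (not part of the quiz, question 15 above): the average / arithmetic mean in a few lines of Python with made-up numbers: add the given values, then divide the sum by the number of values.
values = [3, 7, 8, 10]                 # made-up numbers
mean = sum(values) / len(values)       # (3 + 7 + 8 + 10) / 4 = 7.0
print(mean)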
16. Is the probability of an event, ignoring any information about other events. The marginal probability of A is written P(A). Contrast with conditional probability.
Null hypothesis
Type I errors
That value is the median value
Marginal probability
17. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written 'Y | X') is the probability distribution of Y when X is known to be a particular value.
Bias
The Expected value
Conditional distribution
Nominal measurements
18. A numerical measure that describes an aspect of a population.
An event
Parameter
Particular realizations of a random variable
covariance of X and Y
19. Working from a null hypothesis, two basic forms of error are recognized:
the population variance
Type I errors & Type II errors
Power of a test
Alpha value (Level of Significance)
20. A variable that has an important effect on the response variable and the relationship among the variables in a study but is not one of the explanatory variables studied either because it is unknown or not measured.
Step 3 of a statistical experiment
Statistical inference
A Random vector
Lurking variable
21. Is used in 'mathematical statistics' (alternatively, 'statistical theory') to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the ...
Correlation coefficient
Probability
Valid measure
Step 3 of a statistical experiment
22. Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. ...
Estimator
Lurking variable
Step 1 of a statistical experiment
inferential statistics
23. Is data that can take only two values, usually represented by 0 and 1.
Prior probability
the population mean
Binary data
Mutual independence
24. Also called correlation coefficient, is a numeric measure of the strength of linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson product-moment correlation coefficient.
Sampling
Individual
Correlation
Statistical adjustment
25. Is a measure of its statistical dispersion, indicating how far from the expected value its values typically are. The variance of random variable X is typically designated as Var(X), σX², or simply σ².
Type 2 Error
The variance of a random variable
Beta value
Type 1 Error
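Study note (not part of the quiz, question 25 above): a short Python sketch of the population variance as the average squared distance from the mean, with made-up data.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]          # made-up data, mean 5.0
mu = sum(xs) / len(xs)
variance = sum((x - mu) ** 2 for x in xs) / len(xs)    # 4.0 for this data
print(variance)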
26. The errors, or differences between the estimated response ŷi and the actual measured response yi, taken collectively.
Step 2 of a statistical experiment
Alpha value (Level of Significance)
Residuals
The Expected value
27. There are four main levels of measurement used in statistics; each of these has different degrees of usefulness in statistical research.
nominal, ordinal, interval, and ratio
A population or statistical population
Variable
applied statistics
28. Gives the probability distribution for a continuous random variable.
the population mean
Inferential statistics
A probability density function
Simpson's Paradox
29. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, when we assert that two random variables are independent ...
Independence or Statistical independence
Step 3 of a statistical experiment
Divide the sum by the number of values.
Treatment
30. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters ...
Step 2 of a statistical experiment
Bias
nominal, ordinal, interval, and ratio
A Distribution function
31. Is the most commonly used measure of statistical dispersion. It is the square root of the variance, and is generally written σ (sigma).
hypotheses
A Distribution function
Probability
The standard deviation
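Study note (not part of the quiz, question 31 above): the standard deviation as the square root of the variance, reusing the made-up data from the variance note.
import math

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]          # same made-up data
mu = sum(xs) / len(xs)
variance = sum((x - mu) ** 2 for x in xs) / len(xs)
sigma = math.sqrt(variance)                             # 2.0 for this data
print(sigma)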
32. Statistics involve methods of using information from a sample to draw conclusions regarding the population.
Posterior probability
The Range
Inferential
Reliable measure
33. Is the set of possible outcomes of an experiment. For example, the sample space for rolling a six-sided die will be {1, 2, 3, 4, 5, 6}.
Beta value
covariance of X and Y
The sample space
Greek letters
34. Can refer either to a sample not being representative of the population, or to the difference between the expected value of an estimator and the true value.
Placebo effect
Variability
Bias
Descriptive
35. Are written in corresponding lower case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X.
Step 2 of a statistical experiment
Block
Joint distribution
Particular realizations of a random variable
36. (pdfs) and probability mass functions are denoted by lower case letters, e.g. f(x).
Probability density functions
The sample space
Atomic event
A Random vector
37. The result of a Bayesian analysis that encapsulates the combination of prior beliefs or information with observed data.
Posterior probability
Ordinal measurements
Law of Parsimony
the population variance
38. Of a group of numbers is the center point of all those number values.
Step 1 of a statistical experiment
Observational study
descriptive statistics
The average - or arithmetic mean
39. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Posterior probability
descriptive statistics
nominal - ordinal - interval - and ratio
Joint probability
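Study note (not part of the quiz, question 39 above): joint probability P(A and B) computed by enumerating a made-up sample space of two fair coin tosses.
outcomes = [(c1, c2) for c1 in "HT" for c2 in "HT"]    # four equally likely outcomes
# A: first toss is heads; B: second toss is heads
p_A_and_B = sum(1 for c1, c2 in outcomes if c1 == "H" and c2 == "H") / len(outcomes)
print(p_A_and_B)    # P(A and B) = 0.25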
40. To find the median value of a set of numbers: Arrange the numbers in numerical order. Locate the two middle numbers in the list. Find the average of those two middle values.
Count data
That value is the median value
descriptive statistics
Power of a test
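Study note (not part of the quiz, question 40 above): the median procedure from question 40 in a short Python sketch, using made-up numbers with an even count so the two middle values get averaged.
values = [7, 1, 4, 9, 3, 8]                       # made-up numbers
ordered = sorted(values)                          # [1, 3, 4, 7, 8, 9]
mid = len(ordered) // 2
median = (ordered[mid - 1] + ordered[mid]) / 2    # average of 4 and 7 = 5.5
print(median)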
41. Where the null hypothesis is falsely rejected, giving a 'false positive'.
Sampling frame
A Random vector
Type I errors
Sampling Distribution
42. The collection of all possible outcomes in an experiment.
Greek letters
Sample space
A likelihood function
Parameter
43. Statistics involve methods of organizing, picturing, and summarizing information from samples or populations.
Pairwise independence
quantitative variables
Count data
Descriptive
44. A numerical measure that assesses the strength of a linear relationship between two variables.
Correlation coefficient
Type 1 Error
categorical variables
Qualitative variable
45. Probability of accepting a false null hypothesis.
Statistics
Beta value
Joint distribution
Law of Parsimony
46. The proportion of the total variation that is explained by a linear regression model.
observational study
Cumulative distribution functions
descriptive statistics
Coefficient of determination
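Study note (not part of the quiz, question 46 above): a Python sketch of the coefficient of determination, R² = 1 - SS_res / SS_tot, for a straight line fitted by least squares to made-up data (the same invented numbers as the earlier least-squares note).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]                                  # made-up data
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
preds = [a + b * x for x in xs]
ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))      # unexplained variation
ss_tot = sum((y - mean_y) ** 2 for y in ys)                # total variation
r_squared = 1 - ss_res / ss_tot                            # proportion of total variation explained
print(r_squared)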
47. A numerical facsimile or representation of a real-world phenomenon.
Simulation
Interval measurements
Conditional distribution
An event
48. Is defined as the expected value of the random variable (X - μ)(Y - ν).
A Probability measure
hypothesis
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
descriptive statistics
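Study note (not part of the quiz, question 48 above): a sample-based Python sketch of covariance as the average product of deviations from the means, mirroring E[(X - μ)(Y - ν)], with made-up paired data.
xs = [1.0, 2.0, 3.0, 4.0]               # made-up paired observations of X
ys = [2.0, 4.0, 6.0, 8.0]               # and of Y
mu = sum(xs) / len(xs)                  # stand-in for E(X)
nu = sum(ys) / len(ys)                  # stand-in for E(Y)
cov = sum((x - mu) * (y - nu) for x, y in zip(xs, ys)) / len(xs)
print(cov)                              # 2.5 here; positive because X and Y rise together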
49. A collection of events is mutually independent if for any subset of the collection, the joint probability of all events occurring is equal to the product of the probabilities of the individual events. Think of the result of a series of coin flips ...
Statistical dispersion
Mutual independence
A data set
Likert scale
50. A group of individuals sharing some common features that might affect the treatment.
Block
Trend
Variability
Simpson's Paradox