Test your basic knowledge
CLEP General Mathematics: Probability and Statistics
Subjects: clep, math
Instructions: Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might sometimes find the answers obvious, but taking the test repeatedly reinforces your understanding.
1. Are two related but separate academic disciplines. Statistical analysis often uses probability distributions, and the two topics are often studied together. However, probability theory contains much that is mostly of mathematical interest and not of direct relevance to statistics.
inferential statistics
Step 3 of a statistical experiment
Type II errors
Probability and statistics
2. Where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a 'false negative'.
Parameter
Random variables
the population mean
Type II errors
3. Is a function of the known data that is used to estimate an unknown parameter; an estimate is the result of applying the function to a particular set of data. The mean can be used as an estimator.
Inferential statistics
expected value of X
Estimator
inferential statistics
4. Describes the spread in the values of the sample statistic when many samples are taken.
Step 2 of a statistical experiment
Variability
Marginal distribution
s-algebras
5. Is a sample space over which a probability measure has been defined.
Pairwise independence
A probability space
Binomial experiment
Correlation coefficient
6. Gives the probability of events in a probability space.
Average and arithmetic mean
P-value
Conditional probability
A Probability measure
7. Long-term upward or downward movement over time.
Sampling
Trend
Experimental and observational studies
Beta value
8. Is the probability of some event A, assuming event B. Conditional probability is written P(A|B), and is read 'the probability of A, given B'.
Conditional probability
nominal, ordinal, interval, and ratio
Step 3 of a statistical experiment
Conditional distribution
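The definition of conditional probability above can be illustrated with a small sketch. The events A and B below (sum of two dice, value of the first die) are illustrative choices, not part of the test:

```python
from fractions import Fraction

# Sample space: all ordered pairs from two fair six-sided dice.
space = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def prob(event):
    """Probability of an event (given as a predicate) under equal likelihood."""
    return Fraction(sum(1 for s in space if event(s)), len(space))

A = lambda s: s[0] + s[1] == 8   # event A: the sum is 8
B = lambda s: s[0] == 6          # event B: the first die shows 6

# P(A|B) = P(A and B) / P(B)
p_a_given_b = prob(lambda s: A(s) and B(s)) / prob(B)
print(p_a_given_b)  # 1/6
```

Knowing the first die shows 6 leaves exactly one of the six remaining outcomes summing to 8, hence P(A|B) = 1/6.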
9. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, they are sometimes grouped together as
Observational study
Sampling
categorical variables
Sampling Distribution
10. Is the probability distribution, under repeated sampling of the population, of a given statistic.
The average, or arithmetic mean
Marginal distribution
A sampling distribution
Count data
11. In the long run, as the sample size increases, the relative frequencies of outcomes approach the theoretical probability.
The average, or arithmetic mean
Law of Large Numbers
Mutual independence
hypotheses
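The Law of Large Numbers described above can be demonstrated by simulation. This is a sketch, not part of the test; the seed and roll counts are arbitrary choices:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def relative_frequency(n_rolls):
    """Fraction of n_rolls of a fair die that come up 6 (theoretical value: 1/6)."""
    hits = sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 6)
    return hits / n_rolls

# As the sample size grows, the relative frequency settles near 1/6 ≈ 0.1667.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

Small samples wander noticeably around 1/6, while the million-roll estimate lands very close to it.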
12. The probability distribution of a sample statistic based on all the possible simple random samples of the same size from a population.
Sampling Distribution
covariance of X and Y
Variable
Simulation
13. Is denoted by x̄, pronounced 'x bar'.
Sampling frame
Parameter
quantitative variables
The arithmetic mean of a set of numbers x1, x2, ..., xn
14. A variable that has an important effect on the response variable and the relationship among the variables in a study but is not one of the explanatory variables studied either because it is unknown or not measured.
Statistic
A population or statistical population
Probability density functions
Lurking variable
15. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Treatment
Joint distribution
Probability
Joint probability
16. Can be, for example, the possible outcomes of a die roll (but it is not assigned a value). The distribution function of a random variable gives the probability of different results. We can also derive the mean and variance of a random variable.
Step 3 of a statistical experiment
A random variable
Parameter, or 'statistical parameter'
Statistical adjustment
17. Where the null hypothesis is falsely rejected, giving a 'false positive'.
Type I errors
descriptive statistics
Binary data
methods of least squares
18. Is often denoted by placing a caret over the corresponding symbol, e.g. θ̂, pronounced 'theta hat'.
An estimate of a parameter
methods of least squares
Dependent Selection
Random variables
19. The probability of correctly detecting a false null hypothesis.
Null hypothesis
experimental studies and observational studies.
Power of a test
Joint probability
20. (or just likelihood) is a conditional probability function considered as a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with the number k from a bag of n balls, numbered 1 to n. Then
Law of Parsimony
Dependent Selection
Statistical adjustment
A likelihood function
21. Is data that can take only two values - usually represented by 0 and 1.
Atomic event
Alpha value (Level of Significance)
Binary data
Statistical dispersion
22. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol. 4. Further examining the data set in secondary analyses, to suggest new hypotheses for future study. 5. Documenting and presenting the results of the study.
Alpha value (Level of Significance)
applied statistics
Independent Selection
Step 3 of a statistical experiment
23. Is data arising from counting that can take only non-negative integer values.
Count data
A population or statistical population
The median value
the sample mean, the sample variance s2, the sample correlation coefficient r, the sample cumulants kr
24. κr
Type 2 Error
Particular realizations of a random variable
Conditional distribution
the population cumulants
25. The result of a Bayesian analysis that encapsulates the combination of prior beliefs or information with observed data.
Posterior probability
variance of X
Probability density functions
The Range
26. (pdfs) and probability mass functions are denoted by lower-case letters, e.g. f(x).
Probability density functions
quantitative variables
A population or statistical population
A random variable
27. Is its expected value. The mean (or sample mean) of a data set is just the average value.
The Mean of a random variable
Quantitative variable
Trend
Pairwise independence
28. μ
the population mean
An event
inferential statistics
Trend
29. Statistics involve methods of organizing, picturing, and summarizing information from samples or populations.
Descriptive
Atomic event
The Covariance between two random variables X and Y, with expected values E(X) =
Type I errors
30. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, when we assert that two random variables are independent
The Covariance between two random variables X and Y, with expected values E(X) =
Independence or Statistical independence
A Statistical parameter
Type 1 Error
31. Consists of a number of independent trials repeated under identical conditions. On each trial, there are two possible outcomes.
the population cumulants
Simple random sample
Alpha value (Level of Significance)
Binomial experiment
32. (cdfs) are denoted by upper-case letters, e.g. F(x).
hypotheses
Inferential
Cumulative distribution functions
Correlation coefficient
33. Is a measure of the asymmetry of the probability distribution of a real-valued random variable. Roughly speaking, a distribution has positive skew (right-skewed) if the higher tail is longer and negative skew (left-skewed) if the lower tail is longer.
Skewness
Parameter
Mutual independence
Probability and statistics
34. In number theory, scatter plots of data generated by a distribution function may be transformed with familiar tools used in statistics to reveal underlying patterns, which may then lead to
Sampling frame
the population variance
Type 2 Error
hypotheses
35. Rejecting a true null hypothesis.
Type 2 Error
An experimental study
Law of Large Numbers
Type 1 Error
36. Uses patterns in the sample data to draw inferences about the population represented, accounting for randomness. These inferences may take the form of answering yes/no questions about the data (hypothesis testing) or estimating numerical characteristics of the data (estimation).
quantitative variables
Count data
Inferential statistics
A data point
37. A numerical measure that assesses the strength of a linear relationship between two variables.
Average and arithmetic mean
Correlation coefficient
A population or statistical population
Greek letters
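The correlation coefficient defined above can be computed directly. This is a sketch with made-up data, not part of the test; `pearson_r` is an illustrative helper, not a standard library function:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))   # un-normalized covariance
    sxx = sum((x - mx) ** 2 for x in xs)                     # sum of squared deviations
    syy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(sxx * syy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0  (perfect positive linear relationship)
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0 (perfect negative linear relationship)
```

Values near +1 or -1 indicate a strong linear relationship; values near 0 indicate little or no linear relationship.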
38. Can refer either to a sample not being representative of the population, or to the difference between the expected value of an estimator and the true value.
Average and arithmetic mean
categorical variables
Bias
Correlation coefficient
39. Is a measure of the 'peakedness' of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations.
Independence or Statistical independence
The average, or arithmetic mean
Kurtosis
Interval measurements
40. There are two major types of causal statistical studies. In both types, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.
Sampling Distribution
Average and arithmetic mean
experimental studies and observational studies.
descriptive statistics
41. Is used in 'mathematical statistics' (alternatively, 'statistical theory') to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method.
Reliable measure
Probability
The median value
A likelihood function
42. Descriptive statistics and inferential statistics (a.k.a. predictive statistics) together comprise
applied statistics
methods of least squares
Qualitative variable
Variability
43. Describes a characteristic of an individual to be measured or observed.
Bias
Coefficient of determination
Type I errors
Variable
44. E[X]
Simulation
expected value of X
Joint distribution
A probability distribution
45. Have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data.
A Random vector
Ratio measurements
Nominal measurements
P-value
46. Is the exact middle value of a set of numbers. Arrange the numbers in numerical order, then find the value in the middle of the list.
The median value
Likert scale
Bias
Step 2 of a statistical experiment
47. To find the median value of a set of numbers: Arrange the numbers in numerical order. Locate the two middle numbers in the list. Find the average of those two middle values.
methods of least squares
A Statistical parameter
Law of Large Numbers
That value is the median value
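The two median procedures above (odd count: take the middle value; even count: average the two middle values) can be sketched in one function. The example lists are illustrative, not part of the test:

```python
def median(values):
    """Median: arrange in order, take the middle value (or the mean of the two middles)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]                    # odd count: single middle value
    return (s[mid - 1] + s[mid]) / 2     # even count: average the two middle values

print(median([7, 1, 3]))       # 3    (sorted: 1, 3, 7)
print(median([7, 1, 3, 9]))    # 5.0  (sorted: 1, 3, 7, 9; average of 3 and 7)
```

Python's standard library also provides `statistics.median`, which follows the same rule.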
48. Another name for elementary event.
The Mean of a random variable
Atomic event
An estimate of a parameter
Probability
49. Is a process of selecting observations to obtain knowledge about a population. There are many methods for choosing the sample on which to make the observations.
Binomial experiment
Simpson's Paradox
Sampling
A Statistical parameter
50. The proportion of the total variation that is explained by a linear regression model.
Simpson's Paradox
Coefficient of determination
applied statistics
The Covariance between two random variables X and Y, with expected values E(X) =