Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. Descriptive statistics and inferential statistics (a.k.a. predictive statistics) together comprise
Independent Selection
the population mean
applied statistics
Kurtosis
2. When there is an even number of values...
Quantitative variable
That is the median value
Bias
A sample
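As a quick illustration of the median rule in the question above, here is a minimal Python sketch; the data values are made up for illustration and are not part of the quiz:
```python
import statistics

# With an even number of values, the median is the average of the two middle values.
values = [3, 7, 9, 15]              # hypothetical sorted data
middle_pair = (values[1] + values[2]) / 2
print(middle_pair)                   # 8.0
print(statistics.median(values))     # 8.0 -- agrees with the manual calculation
```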
3. To prove the guiding theory further, these predictions are tested as well, as part of the scientific method. If the inference holds true, then the descriptive statistics of the new data increase the soundness of that
A probability density function
the population variance
Correlation coefficient
hypothesis
4. Is a set of entities about which statistical inferences are to be drawn, often based on random sampling. One can also talk about a population of measurements or values.
Marginal probability
Step 3 of a statistical experiment
hypothesis
A population or statistical population
5. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol. 4. Further examining the data set in secondary analyses, to suggest new hypotheses for future study. 5. Documenting and presenting
Likert scale
Correlation
Cumulative distribution functions
Step 3 of a statistical experiment
6. A numerical measure that describes an aspect of a sample.
An estimate of a parameter
Dependent Selection
Statistic
Treatment
7. Is often denoted by placing a caret over the corresponding symbol, e.g. θ̂, pronounced 'theta hat'.
Parameter
An estimate of a parameter
Alpha value (Level of Significance)
nominal, ordinal, interval, and ratio
8. Is a measure of the 'peakedness' of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations.
the population variance
That is the median value
the population correlation
Kurtosis
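A rough numerical illustration of the kurtosis definition above, on made-up data; this sketch uses the simple moment ratio m4 / m2², one common population-style convention among several:
```python
# Kurtosis as E[(X - mean)^4] / sd^4; values near 3 match a normal distribution,
# higher values indicate heavier tails driven by infrequent extreme deviations.
data = [2, 4, 4, 4, 5, 5, 7, 9, 30]   # hypothetical data; one extreme value inflates the result
n = len(data)
mean = sum(data) / n
m2 = sum((x - mean) ** 2 for x in data) / n   # second central moment (variance)
m4 = sum((x - mean) ** 4 for x in data) / n   # fourth central moment
print(m4 / m2 ** 2)
```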
9. (or just likelihood) is a conditional probability function considered a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with the number k from a bag of n balls, numbered 1 to n. Then
Sampling Distribution
The Mean of a random variable
A likelihood function
Inferential
10. The collection of all possible outcomes in an experiment.
Confounded variables
Descriptive
Sample space
the sample or population mean
11. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Joint probability
Individual
Sample space
A sampling distribution
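For the joint-probability question above, a small Monte Carlo sketch under a hypothetical setup (two independent fair dice, an assumption introduced here, not stated in the quiz):
```python
import random

# Estimate the joint probability P(A and B) by simulation.
# A = first die shows 6, B = second die shows 6; with independent dice the exact
# joint probability is (1/6) * (1/6), about 0.0278.
random.seed(0)
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) == 6 and random.randint(1, 6) == 6)
print(hits / trials)
```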
12. Probability of rejecting a true null hypothesis.
Credence
Mutual independence
Alpha value (Level of Significance)
Ordinal measurements
13. A sample selected in such a way that each individual is equally likely to be selected as well as any group of size n is equally likely to be selected.
Simple random sample
That is the median value
P-value
Residuals
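A minimal sketch of drawing a simple random sample, as defined in the question above, using Python's standard library; the population labels are invented for illustration:
```python
import random

# Draw a simple random sample of size n = 5 from a small hypothetical population:
# every individual (and every group of 5) is equally likely to be chosen.
population = list(range(1, 101))   # individuals labelled 1..100
random.seed(42)
sample = random.sample(population, 5)
print(sample)
```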
14. The result of a Bayesian analysis that encapsulates the combination of prior beliefs or information with observed data
Posterior probability
Mutual independence
Variability
the population mean
15. Is a process of selecting observations to obtain knowledge about a population. There are many methods for choosing the sample on which to do the observations.
Sampling
Marginal probability
Skewness
Joint probability
16. Cov[X, Y]:
the population cumulants
P-value
covariance of X and Y
Prior probability
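For the covariance question above, a short worked example of the sample covariance Cov[X, Y] on hypothetical paired data (values chosen only for illustration):
```python
# Sample covariance: Cov[X, Y] = sum((x - mean_x) * (y - mean_y)) / (n - 1).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 5.0, 9.0]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (len(xs) - 1)
print(cov_xy)   # positive value: X and Y tend to move in the same direction
```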
17. Describes the spread in the values of the sample statistic when many samples are taken.
Descriptive statistics
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Variability
An experimental study
18. A consistent, repeated deviation of the sample statistic from the population parameter in the same direction when many samples are taken.
Ratio measurements
A likelihood function
Bias
Null hypothesis
19. Is the probability of some event A, assuming event B. Conditional probability is written P(A|B), and is read 'the probability of A, given B'.
Law of Large Numbers
Type I errors
applied statistics
Conditional probability
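A worked example of conditional probability for the question above, using a single fair die; the setup is hypothetical and not part of the original quiz:
```python
from fractions import Fraction

# Conditional probability: P(A | B) = P(A and B) / P(B).
# A = "roll is even", B = "roll is greater than 3" (i.e. 4, 5, or 6).
p_b = Fraction(3, 6)            # outcomes {4, 5, 6}
p_a_and_b = Fraction(2, 6)      # outcomes {4, 6}
p_a_given_b = p_a_and_b / p_b
print(p_a_given_b)              # 2/3
```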
20. Describes a characteristic of an individual to be measured or observed.
Probability and statistics
Variable
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Type I errors
21. Statistics involve methods of organizing, picturing, and summarizing information from samples or populations.
The arithmetic mean of a set of numbers x1, x2, ..., xn
Standard error
Descriptive
The Range
22. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters a
Step 2 of a statistical experiment
Pairwise independence
variance of X
Type I errors
23. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written 'Y | X') is the probability distribution of Y when X is known to be a particular value.
Particular realizations of a random variable
Conditional distribution
Statistical inference
hypotheses
24. Given two jointly distributed random variables X and Y, the marginal distribution of X is simply the probability distribution of X ignoring information about Y.
Marginal distribution
Descriptive statistics
The Expected value
Atomic event
25. In number theory, scatter plots of data generated by a distribution function may be transformed with familiar tools used in statistics to reveal underlying patterns, which may then lead to
Type 2 Error
Variability
Confounded variables
hypotheses
26. When you have two or more competing models - choose the simpler of the two models.
Power of a test
Law of Parsimony
A probability density function
Type 2 Error
27. Any specific experimental condition applied to the subjects
Alpha value (Level of Significance)
the sample or population mean
Treatment
Interval measurements
28. The probability distribution of a sample statistic based on all the possible simple random samples of the same size from a population.
Coefficient of determination
Sampling Distribution
Atomic event
Cumulative distribution functions
29. To find the average, or arithmetic mean, of a set of numbers:
f(z), and its cdf by F(z).
Average and arithmetic mean
Divide the sum by the number of values.
Type 1 Error
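For the mean calculation in the question above, a minimal sketch on made-up values, cross-checked against Python's statistics module:
```python
import statistics

# Arithmetic mean: add up the values, then divide the sum by the number of values.
values = [4, 8, 15, 16, 23, 42]      # hypothetical data
mean = sum(values) / len(values)
print(mean)                          # 18.0
print(statistics.mean(values))       # 18.0 -- same result from the standard library
```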
30. Samples are drawn from two different populations such that there is a matching of the first sample data drawn and a corresponding data value in the second sample data.
A population or statistical population
Dependent Selection
Probability density functions
Step 3 of a statistical experiment
31. (or expectation) of a random variable is the sum of the probability of each possible outcome of the experiment multiplied by its payoff ('value'). Thus, it represents the average amount one 'expects' to win per bet if bets with identical odds are repeated
Individual
Joint distribution
Cumulative distribution functions
The Expected value
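A small worked example of the expected-value definition above, under a hypothetical bet on a fair six-sided die (the payoffs are invented for illustration):
```python
from fractions import Fraction

# Expected value: sum of (probability of each outcome) * (its payoff).
# Hypothetical bet: win 6 on a six, lose 1 otherwise.
outcomes = [(Fraction(1, 6), 6), (Fraction(5, 6), -1)]
expected_value = sum(p * payoff for p, payoff in outcomes)
print(expected_value)   # 1/6, the average winnings per bet over many repetitions
```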
32. A collection of events is mutually independent if for any subset of the collection, the joint probability of all events occurring is equal to the product of the joint probabilities of the individual events. Think of the result of a series of coin-flips
quantitative variables
An Elementary event
Mutual independence
Atomic event
33. Is inference about a population from a random sample drawn from it or, more generally, about a random process from its observed behavior during a finite period of time.
Likert scale
Statistical inference
Descriptive
Step 2 of a statistical experiment
34.
variance of X
the population mean
Treatment
Trend
35. Is data that can take only two values, usually represented by 0 and 1.
Binary data
f(z), and its cdf by F(z).
The Expected value
Joint distribution
36. A pairwise independent collection of random variables is a set of random variables any two of which are independent.
That is the median value
Likert scale
Pairwise independence
Reliable measure
37. Statistical methods can be used for summarizing or describing a collection of data; this is called
categorical variables
Treatment
That is the median value
descriptive statistics
38. Also called correlation coefficient, is a numeric measure of the strength of linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson product-moment correlation coefficient.
Bias
Divide the sum by the number of values.
Correlation
The variance of a random variable
39. Is the function that gives the probability distribution of a random variable. It cannot be negative, and its integral on the probability space is equal to 1.
A Distribution function
Correlation coefficient
Marginal probability
A probability space
40. A numerical measure that assesses the strength of a linear relationship between two variables.
Correlation coefficient
Statistical dispersion
A statistic
Simpson's Paradox
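For the correlation-coefficient question above, a sketch that computes the Pearson r by hand on hypothetical paired data (on Python 3.10+ the same value is available from statistics.correlation):
```python
import math

# Pearson correlation coefficient: r = Cov[X, Y] / (std(X) * std(Y));
# r close to +1 or -1 indicates a strong linear relationship.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (n - 1)
std_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs) / (n - 1))
std_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / (n - 1))
print(cov / (std_x * std_y))   # about 0.77 for these made-up values
```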
41. κr
the population cumulants
The Mean of a random variable
Type 1 Error
experimental studies and observational studies.
42. Is a function that gives the probability of all elements in a given space: see List of probability distributions
inferential statistics
Prior probability
A probability distribution
the population cumulants
43. Is a typed measurement; it can be a boolean value, a real number, a vector (in which case it's also called a data vector), etc.
That value is the median value
Marginal distribution
A sample
A data point
44. Where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a 'false negative'.
Type II errors
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Pairwise independence
quantitative variables
45. A common goal for a statistical research project is to investigate causality - and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables or response.
Experimental and observational studies
the population cumulants
Ordinal measurements
Individual
46. The proportion of the total variation that is explained by a linear regression model.
the population variance
Trend
Coefficient of determination
Lurking variable
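For the question above, a minimal sketch computing R² as 1 minus (residual variation / total variation) for a hypothetical line fitted to made-up data:
```python
# Coefficient of determination for a hypothetical fit y_hat = 1 + 2x.
xs = [1, 2, 3, 4]
ys = [2.9, 5.1, 7.2, 8.8]
y_hat = [1 + 2 * x for x in xs]
mean_y = sum(ys) / len(ys)
ss_res = sum((y - f) ** 2 for y, f in zip(ys, y_hat))   # unexplained (residual) variation
ss_tot = sum((y - mean_y) ** 2 for y in ys)             # total variation
print(1 - ss_res / ss_tot)                              # close to 1 => the line explains most variation
```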
47. The probability of the observed value or something more extreme under the assumption that the null hypothesis is true.
Experimental and observational studies
Prior probability
P-value
That is the median value
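A worked p-value example for the question above, using a hypothetical coin-flipping experiment and an exact binomial tail probability:
```python
import math

# Hypothetical experiment: a coin is flipped 100 times and shows 60 heads.
# Under the null hypothesis of a fair coin, P(heads) = 0.5; the one-sided p-value
# is the probability of observing 60 or more heads.
n, observed = 100, 60
p_value = sum(math.comb(n, k) * 0.5 ** n for k in range(observed, n + 1))
print(p_value)   # about 0.028 -- small, so 60 heads would be surprising if the null were true
```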
48. Changes over time that show a regular periodicity in the data, where 'regular' means over a fixed interval; the time between repetitions is called the period.
Seasonal effect
Conditional probability
the population mean
Average and arithmetic mean
49. Probability of accepting a false null hypothesis.
Beta value
A Random vector
Marginal distribution
Law of Large Numbers
50. Are two related but separate academic disciplines. Statistical analysis often uses probability distributions, and the two topics are often studied together. However, probability theory contains much that is mostly of mathematical interest and not directly relevant to statistics.
A probability density function
Probability and statistics
Simulation
Conditional distribution