Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the correct answer obvious, but retaking the test reinforces your understanding each time.
1. Where the null hypothesis is falsely rejected, giving a 'false positive'.
Type I errors
Atomic event
An experimental study
Mutual independence
2. (or atomic event) is an event with only one element. For example, when pulling a card out of a deck, 'getting the jack of spades' is an elementary event, while 'getting a king or an ace' is not.
A probability distribution
An Elementary event
the population correlation
Type I errors & Type II errors
3. When you have two or more competing models, choose the simpler one.
Law of Parsimony
Parameter, or 'statistical parameter'
A statistic
Sampling Distribution
4. In particular, the pdf of the standard normal distribution is denoted by
Treatment
φ(z), and its cdf by Φ(z).
Inferential
Type I errors
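For reference, the usual convention is φ(z) = e^(-z^2 / 2) / √(2π) for the standard normal pdf, with Φ(z) denoting its cumulative distribution function.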
5. A consistent, repeated deviation of the sample statistic from the population parameter in the same direction when many samples are taken.
The Mean of a random variable
hypothesis
Greek letters
Bias
6. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, they are sometimes grouped together as
categorical variables
An Elementary event
Joint probability
Descriptive
7. Is the probability distribution, under repeated sampling of the population, of a given statistic.
A sampling distribution
A probability density function
Power of a test
Sampling frame
8. Are usually written in upper case roman letters: X, Y, etc.
Confounded variables
Type 1 Error
Random variables
Credence
9. Are written in corresponding lower case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X.
Simpson's Paradox
Kurtosis
Particular realizations of a random variable
Skewness
10. The probability of correctly detecting a false null hypothesis.
Power of a test
A Distribution function
A probability space
Valid measure
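For reference, if β denotes the probability of a Type II error (failing to reject a false null hypothesis), the power of the test is 1 - β.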
11. Can refer either to a sample not being representative of the population, or to the difference between the expected value of an estimator and the true value.
Confounded variables
Bias
Kurtosis
Residuals
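For reference, in the estimator sense the bias of an estimator θ̂ of a parameter θ is usually written Bias(θ̂) = E[θ̂] - θ; an unbiased estimator has bias zero.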
12. Are usually written with upper case calligraphic letters (e.g. F for the set of sets on which we define the probability P).
σ-algebras
A sampling distribution
expected value of X
The Expected value
13. In Bayesian inference, this represents prior beliefs or other information that is available before new data or observations are taken into account.
Descriptive statistics
A data point
Null hypothesis
Prior probability
14. Samples are drawn from two different populations such that the sample data drawn from one population is completely unrelated to the selection of sample data from the other population.
Inferential
Independent Selection
Marginal probability
Mutual independence
15. Is the most commonly used measure of statistical dispersion. It is the square root of the variance, and is generally written σ (sigma).
A Random vector
The standard deviation
Simpson's Paradox
the population mean
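A minimal Python sketch of the calculation, using the standard library's statistics module on a made-up data list:

    import statistics

    data = [2, 4, 4, 4, 5, 5, 7, 9]   # made-up example values (mean = 5)
    print(statistics.pstdev(data))    # population standard deviation: 2.0
    print(statistics.stdev(data))     # sample standard deviation (n - 1 denominator), about 2.14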
16. Interpretation of statistical information, in which the assumption is that whatever is proposed as a cause has no effect on the variable being measured, can often involve the development of a
Probability density
Average and arithmetic mean
Null hypothesis
Inferential statistics
17. Is a measure of the 'peakedness' of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations.
Parameter, or 'statistical parameter'
experimental studies and observational studies.
Kurtosis
Observational study
18.
A probability space
the population mean
covariance of X and Y
Parameter, or 'statistical parameter'
19. A subjective estimate of probability.
Statistic
Sampling Distribution
Descriptive
Credence
20. There are four main levels of measurement used in statistics: Each of these has different degrees of usefulness in statistical research.
Observational study
the sample or population mean
nominal, ordinal, interval, and ratio
covariance of X and Y
21. σ^2
Variable
Descriptive statistics
Law of Parsimony
the population variance
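For reference, the population variance is usually defined as σ^2 = E[(X - μ)^2], where μ is the population mean; the sample variance s^2 uses an n - 1 denominator.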
22. Consists of a number of independent trials repeated under identical conditions. On each trial, there are two possible outcomes.
Binomial experiment
nominal, ordinal, interval, and ratio
Type I errors
Dependent Selection
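A minimal Python sketch of a binomial experiment, assuming ten independent fair-coin trials with success probability 0.5 (the numbers here are illustrative, not from the quiz):

    import random

    trials = 10   # number of independent, identical trials
    p = 0.5       # probability of success on each trial (fair coin)
    successes = sum(1 for _ in range(trials) if random.random() < p)
    print(f"{successes} successes out of {trials} trials")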
23. Var[X] :
variance of X
Beta value
Simpson's Paradox
Treatment
24. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written 'Y | X') is the probability distribution of Y when X is known to be a particular value.
expected value of X
Conditional distribution
Individual
Variability
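For reference, for discrete variables this is usually computed as P(Y = y | X = x) = P(X = x, Y = y) / P(X = x), provided P(X = x) > 0.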
25. Is a sample space over which a probability measure has been defined.
Outlier
A probability space
Random variables
Binomial experiment
26. A scale that represents an ordinal scale, such as rating looks on a scale from 1 to 10.
Residuals
Likert scale
Correlation coefficient
quantitative variables
27. Is a parameter that indexes a family of probability distributions.
Trend
Variability
A Statistical parameter
Coefficient of determination
28. Is data arising from counting that can take only non-negative integer values.
Outlier
Marginal probability
Count data
Trend
29. Is the result of applying a statistical algorithm to a data set. It can also be described as an observable random variable.
Credence
observational study
A statistic
Seasonal effect
30. Of a group of numbers is the center point of all those number values.
Block
Cumulative distribution functions
A Statistical parameter
The average, or arithmetic mean
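For example, the average of 3, 7 and 8 is (3 + 7 + 8) / 3 = 6.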
31. A variable that describes an individual by placing the individual into a category or a group.
Average and arithmetic mean
Parameter, or 'statistical parameter'
Step 1 of a statistical experiment
Qualitative variable
32. In number theory, scatter plots of data generated by a distribution function may be transformed with familiar tools used in statistics to reveal underlying patterns, which may then lead to
hypotheses
Simpson's Paradox
Trend
the population variance
33. There are two major types of causal statistical studies: In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.
Random variables
Sample space
the population variance
experimental studies and observational studies.
34. Are simply two different terms for the same thing. Add the given values and divide by the number of values.
Probability
Null hypothesis
Probability and statistics
Average and arithmetic mean
35. The errors, or differences between the estimated response ŷ_i and the actual measured response y_i, collectively
Residuals
Random variables
Descriptive
Bias
36. Have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data
Variability
Individual
Ratio measurements
A Probability measure
37. Is a subset of the sample space, to which a probability can be assigned. For example, on rolling a die, 'getting a five or a six' is an event (with a probability of one third if the die is fair).
Sampling
An event
Correlation
Probability density functions
38. Cov[X, Y]:
the sample or population mean
categorical variables
covariance of X and Y
An estimate of a parameter
39. When there is an even number of values...
Coefficient of determination
the population mean
That is the median value
The Mean of a random variable
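For reference, with an even number of ordered values the median is usually taken as the mean of the two middle values; for example, for the ordered data 2, 5, 7, 10 it is (5 + 7) / 2 = 6.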
40. A sample selected in such a way that each individual is equally likely to be selected, and any group of size n is equally likely to be selected.
Individual
Simple random sample
Conditional probability
Greek letters
41. κ_r
the population cumulants
Statistics
categorical variables
hypotheses
42. Statistical methods can be used for summarizing or describing a collection of data; this is called
Random variables
descriptive statistics
Probability and statistics
Sampling Distribution
43. Two variables such that their effects on the response variable cannot be distinguished from each other.
Confounded variables
Average and arithmetic mean
Bias
covariance of X and Y
44. Is the length of the smallest interval which contains all the data.
The standard deviation
A statistic
The Range
the population variance
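For example, for the data 3, 8, 4, 11 the range is 11 - 3 = 8.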
45. The standard deviation of a sampling distribution.
applied statistics
Type I errors & Type II errors
Sampling
Standard error
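For reference, for the sampling distribution of the sample mean the standard error is σ / √n, where σ is the population standard deviation and n is the sample size.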
46. The collection of all possible outcomes in an experiment.
expected value of X
Type I errors
Sample space
Sampling
47. (or multivariate random variable) is a vector whose components are random variables on the same probability space.
A Probability measure
Type I errors & Type II errors
Binomial experiment
A Random vector
48. Is its expected value. The mean (or sample mean) of a data set is just the average value.
Type 1 Error
Joint distribution
φ(z), and its cdf by Φ(z).
The Mean of a random variable
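For reference, for a discrete random variable the mean is E[X] = Σ x · P(X = x), and the sample mean of a data set x1, ..., xn is (x1 + ... + xn) / n.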
49. Are two related but separate academic disciplines. Statistical analysis often uses probability distributions, and the two topics are often studied together. However, probability theory contains much that is mostly of mathematical interest and not directly relevant to statistics.
Outlier
Probability and statistics
Type I errors
Standard error
50. Is defined as the expected value of the random variable (X - μ)(Y - ν).
Individual
That is the median value
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Law of Large Numbers
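For reference, with E(X) = μ and E(Y) = ν the covariance is Cov(X, Y) = E[(X - μ)(Y - ν)].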