Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions: Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study here first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you may sometimes find an answer obvious, but repeating the test reinforces your understanding.
1. Is the probability of some event A, assuming event B. Conditional probability is written P(A|B), and is read 'the probability of A, given B'.
Conditional distribution
Conditional probability
Residuals
P-value
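The definition in question 1, P(A|B) = P(A and B) / P(B), can be checked numerically. A minimal sketch in Python (the single-die events are illustrative, not part of the test):

```python
# Conditional probability: P(A|B) = P(A and B) / P(B).
# Illustrative events on one fair die roll.
from fractions import Fraction

outcomes = set(range(1, 7))
A = {5, 6}            # event A: roll is 5 or 6
B = {2, 4, 6}         # event B: roll is even

p_B = Fraction(len(B), len(outcomes))            # 1/2
p_A_and_B = Fraction(len(A & B), len(outcomes))  # only {6}: 1/6
p_A_given_B = p_A_and_B / p_B
print(p_A_given_B)    # the probability of A, given B: 1/3
```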
2. Is a sample space over which a probability measure has been defined.
Sampling frame
A probability space
The Mean of a random variable
Prior probability
3. Have meaningful distances between measurements defined, but the zero value is arbitrary (as is the case with longitude and temperature measurements in Celsius or Fahrenheit).
Interval measurements
Treatment
Observational study
Inferential statistics
4. Probability of rejecting a true null hypothesis.
Alpha value (Level of Significance)
Marginal probability
An experimental study
Statistical adjustment
5. Another name for elementary event.
Atomic event
Marginal distribution
Parameter - or 'statistical parameter'
A sampling distribution
6. Design of experiments: using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters write the experimental protocol that will guide the experiment.
Binary data
Step 2 of a statistical experiment
quantitative variables
hypothesis
7. Have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data
Ratio measurements
quantitative variables
Type 2 Error
Inferential
8. (e.g. θ, β) are commonly used to denote unknown parameters (population parameters).
Inferential statistics
Seasonal effect
Greek letters
applied statistics
9. (also called statistical variability) is a measure of how diverse some data is. It can be expressed by the variance or the standard deviation.
Residuals
Statistical dispersion
Joint probability
A random variable
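Question 9's two measures of dispersion can be computed directly; a small sketch (the data set is illustrative, and the population convention of dividing by n is assumed):

```python
# Statistical dispersion expressed as variance and standard deviation.
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]   # illustrative data
mean = sum(data) / len(data)      # 5.0
# Population variance: average squared deviation from the mean.
variance = sum((x - mean) ** 2 for x in data) / len(data)
std_dev = math.sqrt(variance)     # standard deviation, same units as the data
print(variance, std_dev)          # 4.0 2.0
```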
10. Is inference about a population from a random sample drawn from it or, more generally, about a random process from its observed behavior during a finite period of time.
Statistical inference
Alpha value (Level of Significance)
Power of a test
Law of Large Numbers
11. A numerical facsimile or representation of a real-world phenomenon.
Statistical adjustment
Likert scale
Simulation
Probability
12. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol. 4. Further examining the data set in secondary analyses, to suggest new hypotheses for future study. 5. Documenting and presenting the results of the study.
The average - or arithmetic mean
Correlation
Step 3 of a statistical experiment
Sampling Distribution
13. Working from a null hypothesis, two basic forms of error are recognized:
An experimental study
Type I errors & Type II errors
Nominal measurements
Probability
14. A numerical measure that assesses the strength of a linear relationship between two variables.
Block
experimental studies and observational studies.
the population mean
Correlation coefficient
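The correlation coefficient in question 14 (Pearson's r) is the covariance scaled by the two standard deviations; a minimal sketch with illustrative data:

```python
# Pearson correlation coefficient: r = cov(X, Y) / (sd(X) * sd(Y)).
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]              # exactly linear in x, so r should be 1
n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
sd_x = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
sd_y = math.sqrt(sum((b - my) ** 2 for b in y) / n)
r = cov / (sd_x * sd_y)
print(round(r, 6))                # 1.0
```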
15. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Joint probability
The standard deviation
Outlier
Type I errors
16. There are two major types of causal statistical studies: In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.
Power of a test
Simpson's Paradox
methods of least squares
experimental studies and observational studies.
17. Interpretation of statistical information, in which the assumption is that whatever is proposed as a cause has no effect on the variable being measured, can often involve the development of a
variance of X
Null hypothesis
Independence or Statistical independence
hypotheses
18. Is the function that gives the probability distribution of a random variable. It cannot be negative, and its integral on the probability space is equal to 1.
Likert scale
methods of least squares
A Random vector
A Distribution function
19. Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together.
Divide the sum by the number of values.
Joint distribution
The average - or arithmetic mean
Average and arithmetic mean
20. Where the null hypothesis is falsely rejected, giving a 'false positive'.
The Range
Type I errors
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Coefficient of determination
21. Is a parameter that indexes a family of probability distributions.
A Statistical parameter
Type 2 Error
Individual
The Range
22. (or atomic event) is an event with only one element. For example, when pulling a card out of a deck, 'getting the jack of spades' is an elementary event, while 'getting a king or an ace' is not.
Descriptive statistics
An Elementary event
Marginal distribution
Joint probability
23. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, when we assert that two random variables are independent, we mean that the value of one gives no information about the value of the other.
Mutual independence
Credence
Independence or Statistical independence
A data point
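Independence as defined in question 23 can be verified by checking P(A and B) == P(A) * P(B); a sketch using two fair die rolls (the events are illustrative):

```python
# Two events are independent iff P(A and B) = P(A) * P(B).
from fractions import Fraction
from itertools import product

space = list(product(range(1, 7), repeat=2))     # all (roll1, roll2) pairs

def prob(event):
    return Fraction(len(event), len(space))

A = {pair for pair in space if pair[0] == 1}     # first roll is a 1
B = {pair for pair in space if pair[1] == 1}     # second roll is a 1
print(prob(A & B) == prob(A) * prob(B))          # True: the rolls are independent
```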
24. Are usually written in upper case roman letters: X, Y, etc.
Random variables
Count data
Probability and statistics
The Covariance between two random variables X and Y - with expected values E(X) =
25. Is a measure of the 'peakedness' of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations.
Random variables
Estimator
Bias
Kurtosis
26. Consists of a number of independent trials repeated under identical conditions. On each trial, there are two possible outcomes.
A Distribution function
Binomial experiment
Alpha value (Level of Significance)
Valid measure
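For a binomial experiment like question 26 describes, the probability of exactly k successes in n trials is C(n, k) p^k (1-p)^(n-k). A quick sketch (10 fair coin flips, illustrative):

```python
# Binomial experiment: n independent trials, two outcomes each,
# success probability p held constant across trials.
from math import comb

n, p = 10, 0.5                     # e.g. 10 flips of a fair coin

def binom_pmf(k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(5))                # P(exactly 5 heads) = 252/1024 = 0.24609375
print(sum(binom_pmf(k) for k in range(n + 1)))   # probabilities sum to 1.0
```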
27. Gives the probability of events in a probability space.
observational study
A probability density function
A Probability measure
Nominal measurements
28. (pdfs) and probability mass functions are denoted by lower case letters, e.g. f(x).
Simple random sample
Type 2 Error
Count data
Probability density functions
29. Changes over time that show a regular periodicity in the data, where 'regular' means over a fixed interval; the time between repetitions is called the period.
Seasonal effect
The arithmetic mean of a set of numbers x1 - x2 - ... - xn
Greek letters
Descriptive statistics
30. Describes the spread in the values of the sample statistic when many samples are taken.
Variability
Descriptive
A probability space
Joint probability
31. When there is an even number of values...
Marginal probability
That is the median value
Step 2 of a statistical experiment
Outlier
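Question 31 points at the even-count case of the median: with an even number of values, the median is the average of the two middle values. A small sketch:

```python
# Median: middle value of the sorted data; with an even number
# of values, average the two middle values.
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(median([7, 1, 3, 5]))        # even count: (3 + 5) / 2 = 4.0
print(median([7, 1, 3]))           # odd count: 3
```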
32. A collection of events is mutually independent if for any subset of the collection, the joint probability of all events occurring is equal to the product of the joint probabilities of the individual events. Think of the result of a series of coin flips.
observational study
Law of Large Numbers
Bias
Mutual independence
33. (or multivariate random variable) is a vector whose components are random variables on the same probability space.
A Random vector
the population cumulants
Type 2 Error
The median value
34. Are simply two different terms for the same thing: add the given values, then divide the sum by the number of values.
Average and arithmetic mean
A probability density function
Observational study
A Random vector
35. Error also refers to the extent to which individual observations in a sample differ from a central value, such as
the sample or population mean
Random variables
Skewness
Law of Parsimony
36. Is data arising from counting that can take only non-negative integer values.
Count data
Bias
Outlier
Sampling Distribution
37. Are written in corresponding lower case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X.
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Residuals
An event
Particular realizations of a random variable
38. ?
The median value
Step 1 of a statistical experiment
A random variable
the population correlation
39. Statistical methods can be used for summarizing or describing a collection of data; this is called
Step 2 of a statistical experiment
descriptive statistics
the sample or population mean
A data point
40. To find the average, or arithmetic mean, of a set of numbers:
Type I errors
Divide the sum by the number of values.
An event
Type I errors & Type II errors
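Question 40's recipe (add the values, then divide the sum by the number of values) in code form, with illustrative numbers:

```python
# Arithmetic mean: sum of the values divided by how many there are.
def mean(values):
    return sum(values) / len(values)

print(mean([2, 7, 9]))             # (2 + 7 + 9) / 3 = 6.0
```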
41. Are two related but separate academic disciplines. Statistical analysis often uses probability distributions, and the two topics are often studied together. However, probability theory contains much that is mostly of mathematical interest and not directly relevant to statistics.
Probability and statistics
Divide the sum by the number of values.
f(z) - and its cdf by F(z).
A Random vector
42. A measure that is relevant or appropriate as a representation of that property.
Sample space
Simulation
Statistic
Valid measure
43. Is the probability distribution, under repeated sampling of the population, of a given statistic.
An experimental study
Prior probability
A sampling distribution
Cumulative distribution functions
44. σ²
Independence or Statistical independence
Probability density functions
Outlier
the population variance
45. A variable describes an individual by placing the individual into a category or a group.
Parameter - or 'statistical parameter'
The average - or arithmetic mean
Power of a test
Qualitative variable
46. A numerical measure that describes an aspect of a sample.
Joint distribution
Statistic
Ordinal measurements
the population variance
47. Used to reduce bias, this measure weights more relevant information higher than less relevant information.
Statistical adjustment
Probability
Simpson's Paradox
Cumulative distribution functions
48. Descriptive statistics and inferential statistics (a.k.a. predictive statistics) together comprise
A probability distribution
Posterior probability
applied statistics
Joint probability
49. Some commonly used symbols for sample statistics:
Statistical inference
Treatment
Step 1 of a statistical experiment
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
50. Many statistical methods seek to minimize the mean-squared error, and these are called
the population correlation
Type 1 Error
methods of least squares
A probability space
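A method of least squares, as in question 50, picks parameters that minimize the mean-squared error. A minimal sketch for fitting a line y = a + b*x (the function name and data are illustrative):

```python
# Ordinary least squares for a straight line y = a + b*x:
# choose a and b to minimize sum((y_i - (a + b*x_i)) ** 2).
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])   # data lie on y = 1 + 2x
print(a, b)                                    # 1.0 2.0
```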