Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might find some answers obvious at times, but retaking the test reinforces your understanding.
1. (or expectation) of a random variable is the sum of the probability of each possible outcome of the experiment multiplied by its payoff ('value'). Thus, it represents the average amount one 'expects' to win per bet if bets with identical odds are repeated many times.
The Expected value
Alpha value (Level of Significance)
experimental studies and observational studies.
Kurtosis
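If you are studying this definition, a quick numeric check can help. The sketch below uses a fair six-sided die with made-up payoffs (the bet is illustrative, not part of the test):

```python
# Expected value: sum of P(outcome) * payoff over all possible outcomes.
# Hypothetical bet: you win the face value shown on a fair six-sided die.
payoffs = [1, 2, 3, 4, 5, 6]
prob = 1 / 6  # fair die: each outcome is equally likely

expected_value = sum(prob * payoff for payoff in payoffs)
print(expected_value)  # 3.5: the average win per bet over many repeated bets
```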
2. Is the function that gives the probability distribution of a random variable. It cannot be negative, and its integral over the probability space is equal to 1.
Statistical dispersion
A Distribution function
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Average and arithmetic mean
3. Statistical methods can be used for summarizing or describing a collection of data; this is called
The standard deviation
Variable
descriptive statistics
Step 2 of a statistical experiment
4. Is a parameter that indexes a family of probability distributions.
A sample
A Statistical parameter
Inferential statistics
A sampling distribution
5. Where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a 'false negative'.
Divide the sum by the number of values.
Type II errors
Type I errors & Type II errors
Confounded variables
6. Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together.
Skewness
That is the median value
Mutual independence
Joint distribution
7. Is the most commonly used measure of statistical dispersion. It is the square root of the variance, and is generally written σ (sigma).
The standard deviation
A Distribution function
A statistic
Step 1 of a statistical experiment
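A small worked check of this definition, using an illustrative data set and the population form of the variance:

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative values, not from the test

mean = sum(data) / len(data)                                # 5.0
variance = sum((x - mean) ** 2 for x in data) / len(data)   # population variance
sigma = math.sqrt(variance)                                 # standard deviation

print(sigma)  # 2.0 for this data set
```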
8. Is the probability of an event, ignoring any information about other events. The marginal probability of A is written P(A). Contrast with conditional probability.
Type I errors
That value is the median value
Marginal probability
Joint distribution
9. Any specific experimental condition applied to the subjects
Individual
variance of X
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Treatment
10. A sample selected in such a way that each individual is equally likely to be selected, and any group of size n is equally likely to be selected.
Simple random sample
Likert scale
Seasonal effect
A Statistical parameter
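In practice, drawing a sample of this kind is a one-liner; the sketch below uses a hypothetical population of 20 labeled subjects:

```python
import random

random.seed(3)  # fixed seed so the sketch is reproducible

# Simple random sample: every individual, and every group of size n,
# is equally likely to be chosen.
population = ["id%02d" % i for i in range(1, 21)]  # 20 hypothetical subjects
sample = random.sample(population, 5)              # SRS of size n = 5, without replacement

print(sample)  # 5 distinct individuals from the population
```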
11. Cov[X, Y]:
The median value
P-value
Individual
covariance of X and Y
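The covariance Cov[X, Y] is the average product of the two variables' deviations from their means. A direct computation on illustrative paired data:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # exactly linear in xs, so the covariance is positive

mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)

# Cov[X, Y] = E[(X - E[X]) * (Y - E[Y])], estimated over the paired data
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / len(xs)
print(cov_xy)  # 2.5 for these values
```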
12. κr
Count data
Statistic
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
the population cumulants
13. A subjective estimate of probability.
Greek letters
Correlation coefficient
Credence
σ-algebras
14. (or just likelihood) is a conditional probability function considered a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with the number k from a bag of n balls, numbered 1 to n. Then the likelihood of n, given that ball k was drawn, is 1/n for n ≥ k, and 0 otherwise.
Standard error
A likelihood function
An Elementary event
An experimental study
15. Some commonly used symbols for sample statistics
Probability density functions
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
Parameter
A probability distribution
16. The collection of all possible outcomes in an experiment.
The variance of a random variable
Simpson's Paradox
Posterior probability
Sample space
17. Of a group of numbers is the center point of all those number values.
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Treatment
The median value
The average - or arithmetic mean
18. In the long run, as the sample size increases, the relative frequencies of outcomes approach the theoretical probability.
The standard deviation
Likert scale
Law of Large Numbers
Statistic
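The Law of Large Numbers is easy to watch in a simulation. The sketch below flips a fair coin (theoretical probability of heads 0.5) with a small and a large sample:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def relative_frequency(n_flips):
    """Relative frequency of heads in n_flips fair-coin flips."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

small = relative_frequency(100)
large = relative_frequency(100_000)
print(small, large)  # the large-sample frequency lies much closer to 0.5
```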
19. Where the null hypothesis is falsely rejected, giving a 'false positive'.
A Probability measure
the sample or population mean
Statistic
Type I errors
20. Is a function that gives the probability of all elements in a given space; see List of probability distributions.
observational study
Marginal distribution
A statistic
A probability distribution
21. σ²
Individual
Lurking variable
the population variance
descriptive statistics
22. μ
hypotheses
the population mean
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Lurking variable
23. Is a subset of the sample space, to which a probability can be assigned. For example, on rolling a die, 'getting a five or a six' is an event (with a probability of one third if the die is fair).
An event
Mutual independence
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Variability
24. Consists of a number of independent trials repeated under identical conditions. On each trial, there are two possible outcomes.
Binomial experiment
the population cumulants
A likelihood function
Standard error
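A binomial experiment can be simulated directly from this definition: repeat independent two-outcome trials and count the successes. The trial count and success probability below are illustrative:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def binomial_experiment(n_trials, p_success):
    """Run n independent trials under identical conditions; each trial has
    exactly two outcomes (success with probability p_success, else failure).
    Returns the number of successes."""
    return sum(random.random() < p_success for _ in range(n_trials))

successes = binomial_experiment(n_trials=20, p_success=0.3)
print(successes)  # number of successes observed in 20 trials
```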
25. A numerical facsimile or representation of a real-world phenomenon.
observational study
Power of a test
Simulation
Joint probability
26. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the performance of the experiment.
An experimental study
Step 2 of a statistical experiment
quantitative variables
hypotheses
27. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written 'Y | X') is the probability distribution of Y when X is known to be a particular value.
variance of X
Particular realizations of a random variable
Marginal probability
Conditional distribution
28. Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary.
Coefficient of determination
inferential statistics
Nominal measurements
Step 1 of a statistical experiment
29. Have imprecise differences between consecutive values, but have a meaningful order to those values.
Experimental and observational studies
Ordinal measurements
Statistic
expected value of X
30. The probability of the observed value or something more extreme under the assumption that the null hypothesis is true.
Variability
A Probability measure
the population mean
P-value
31. Working from a null hypothesis, two basic forms of error are recognized:
Placebo effect
f(z), and its cdf by F(z).
Credence
Type I errors & Type II errors
32. The probability distribution of a sample statistic based on all the possible simple random samples of the same size from a population.
Sampling Distribution
expected value of X
Correlation coefficient
Statistical dispersion
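The sampling distribution of a statistic can be approximated by actually taking many simple random samples of the same size and recording the statistic each time. A sketch with a toy population of the integers 1..100:

```python
import random

random.seed(2)  # fixed seed so the sketch is reproducible

population = list(range(1, 101))  # toy population; its mean is 50.5

# Approximate the sampling distribution of the sample mean: draw many
# simple random samples of size 10 and record the mean of each one.
sample_means = []
for _ in range(1000):
    sample = random.sample(population, 10)
    sample_means.append(sum(sample) / len(sample))

grand_mean = sum(sample_means) / len(sample_means)
print(grand_mean)  # clusters around the population mean of 50.5
```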
33. A numerical measure that describes an aspect of a population.
Power of a test
σ-algebras
Parameter
Sampling
34. Is inference about a population from a random sample drawn from it or, more generally, about a random process from its observed behavior during a finite period of time.
Statistical inference
Conditional probability
the population cumulants
Law of Parsimony
35. Is often denoted by placing a caret over the corresponding symbol, e.g. θ̂, pronounced 'theta hat'.
Variable
Statistics
An estimate of a parameter
The average - or arithmetic mean
36. Is used in 'mathematical statistics' (alternatively, 'statistical theory') to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method.
Type 1 Error
Marginal distribution
Probability
Individual
37. Two variables such that their effects on the response variable cannot be distinguished from each other.
Posterior probability
Variable
The variance of a random variable
Confounded variables
38. Is a function of the known data that is used to estimate an unknown parameter; an estimate is the result from the actual application of the function to a particular set of data. The mean can be used as an estimator.
Estimator
Average and arithmetic mean
Reliable measure
Bias
39. Statistics involve methods of using information from a sample to draw conclusions regarding the population.
Standard error
An experimental study
Lurking variable
Inferential
40. Given two jointly distributed random variables X and Y, the marginal distribution of X is simply the probability distribution of X ignoring information about Y.
Marginal distribution
Correlation coefficient
Alpha value (Level of Significance)
Mutual independence
41. Data are gathered and correlations between predictors and response are investigated.
observational study
Quantitative variable
Treatment
Sampling frame
42. (pdfs) and probability mass functions are denoted by lower-case letters, e.g. f(x).
Probability density functions
A Distribution function
Pairwise independence
Probability and statistics
43. Is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers and then look at the number of cases of lung cancer in each group.
Parameter
Standard error
Observational study
expected value of X
44. ρ
A sample
the population correlation
nominal, ordinal, interval, and ratio
Conditional probability
45. Also called the correlation coefficient, is a numeric measure of the strength of linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson product-moment correlation coefficient.
hypotheses
Pairwise independence
Correlation
Simulation
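The Pearson coefficient is the covariance divided by the product of the two standard deviations. The shoe-size and height numbers below are made up to echo the example in the question:

```python
import math

# Illustrative paired data: shoe sizes (EU) and heights (cm) for 5 people.
shoe = [38.0, 39.0, 41.0, 42.0, 44.0]
height = [160.0, 165.0, 172.0, 175.0, 183.0]

n = len(shoe)
mean_x = sum(shoe) / n
mean_y = sum(height) / n

# r = Cov(X, Y) / (sigma_X * sigma_Y), all in population form
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(shoe, height)) / n
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in shoe) / n)
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in height) / n)

r = cov / (sd_x * sd_y)
print(r)  # close to 1: a strong positive linear relationship
```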
46. The probability of correctly detecting a false null hypothesis.
Statistical dispersion
Power of a test
A random variable
A probability distribution
47. Is defined as the expected value of the random variable (X − μ)(Y − ν)
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Descriptive
Statistical dispersion
Trend
48. When there is an even number of values...
Reliable measure
That is the median value
Residuals
Sampling Distribution
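The even-count rule for the median is: sort the values and average the two middle ones. A small sketch covering both the even and odd cases:

```python
def median(values):
    """Middle value of the sorted data; with an even number of values,
    the average of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(median([7, 1, 3, 5]))     # even count: (3 + 5) / 2 = 4.0
print(median([7, 1, 3, 5, 9]))  # odd count: the single middle value, 5
```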
49. Are written in corresponding lower-case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X.
Particular realizations of a random variable
Descriptive statistics
Statistical inference
Valid measure
50. Is the probability distribution, under repeated sampling of the population, of a given statistic.
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
A sampling distribution
the sample or population mean
Statistical dispersion