Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might find the answers obvious at times, but you will see that it reinforces your understanding as you take the test each time.
1. Gives the probability distribution for a continuous random variable.
observational study
Divide the sum by the number of values.
Block
A probability density function
2. A consistent, repeated deviation of the sample statistic from the population parameter in the same direction when many samples are taken.
Dependent Selection
Type I errors & Type II errors
Bias
Valid measure
3. The probability of correctly detecting a false null hypothesis.
Binary data
Power of a test
Statistical dispersion
An event
4. E[X]:
Ordinal measurements
A population or statistical population
expected value of X
Variable
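Study note for item 4: the expected value E[X] is the probability-weighted average of the values a random variable can take, E[X] = sum of x * P(X = x). A minimal Python sketch, assuming a fair six-sided die as the illustrative distribution:

# Illustrative sketch: expected value of a discrete random variable,
# E[X] = sum over outcomes of value * probability (fair die assumed).
outcomes = [1, 2, 3, 4, 5, 6]
probabilities = [1/6] * 6
expected_value = sum(x * p for x, p in zip(outcomes, probabilities))
print(expected_value)  # 3.5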
5. Samples are drawn from two different populations such that the sample data drawn from one population is completely unrelated to the selection of sample data from the other population.
Confounded variables
Independent Selection
the population mean
Block
6. In the long run, as the sample size increases, the relative frequencies of outcomes approach the theoretical probability.
Law of Large Numbers
Law of Parsimony
Sampling frame
The standard deviation
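Study note for item 6: the Law of Large Numbers can be seen by simulating coin flips and watching the relative frequency of heads settle near the theoretical probability 0.5. A minimal Python sketch (standard library only; the flip counts are arbitrary):

import random

# Illustrative sketch: relative frequency of heads approaches 0.5
# as the number of fair-coin flips grows.
random.seed(0)
flips = 0
heads = 0
for n in (100, 10_000, 1_000_000):
    while flips < n:
        heads += random.random() < 0.5
        flips += 1
    print(n, heads / flips)  # drifts toward 0.5 as n increases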
7. A collection of events is mutually independent if for any subset of the collection, the joint probability of all events occurring is equal to the product of the joint probabilities of the individual events. Think of the result of a series of coin flips.
quantitative variables
Mutual independence
Qualitative variable
Bias
8. When you have two or more competing models, choose the simpler of the two models.
An estimate of a parameter
methods of least squares
Null hypothesis
Law of Parsimony
9. Patterns in the data may be modeled in a way that accounts for randomness and uncertainty in the observations, and are then used for drawing inferences about the process or population being studied; this is called
covariance of X and Y
Inferential
inferential statistics
A Random vector
10. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters ...
A population or statistical population
Step 2 of a statistical experiment
Conditional probability
Simpson's Paradox
11. Is a function of the known data that is used to estimate an unknown parameter; an estimate is the result from the actual application of the function to a particular set of data. The mean can be used as an estimator.
Estimator
Average and arithmetic mean
Joint probability
Pairwise independence
12. Are written in corresponding lower case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X.
That value is the median value
Statistic
Particular realizations of a random variable
The arithmetic mean of a set of numbers x1, x2, ..., xn
13. Samples are drawn from two different populations such that there is a matching of the first sample data drawn and a corresponding data value in the second sample data.
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Dependent Selection
Sampling
the sample or population mean
14. A variable describes an individual by placing the individual into a category or a group.
Correlation
Joint probability
Qualitative variable
the population variance
15. Error also refers to the extent to which individual observations in a sample differ from a central value, such as
Nominal measurements
Step 1 of a statistical experiment
the sample or population mean
Simpson's Paradox
16. There are four main levels of measurement used in statistics: each of these has different degrees of usefulness in statistical research.
P-value
A Probability measure
Simpson's Paradox
nominal, ordinal, interval, and ratio
17. A numerical measure that describes an aspect of a population.
Simple random sample
The Expected value
Random variables
Parameter
18. Is defined as the expected value of the random variable (X − μ)(Y − ν).
Type 2 Error
Quantitative variable
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Bias
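Study note for item 18: the covariance definition Cov(X, Y) = E[(X − μ)(Y − ν)], with μ = E(X) and ν = E(Y), can be computed directly from paired data. A minimal Python sketch with made-up values, using the divide-by-n (population) form:

# Illustrative sketch: covariance as the mean of (x - mean_x) * (y - mean_y).
xs = [2.0, 4.0, 6.0, 8.0]
ys = [1.0, 3.0, 2.0, 5.0]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / len(xs)
print(cov_xy)  # positive when X and Y tend to move together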
19. Is a sample and the associated data points.
The sample space
Type II errors
A data set
A random variable
20. Used to reduce bias, this measure weights the more relevant information higher than less relevant information.
Qualitative variable
applied statistics
Average and arithmetic mean
Statistical adjustment
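Study note for item 20: one common form of such an adjustment is a weighted mean, in which more relevant observations are given larger weights than less relevant ones. A minimal Python sketch; the values and relevance weights are made up for illustration:

# Illustrative sketch: a weighted mean versus a plain (unweighted) mean.
values = [10.0, 12.0, 30.0]
weights = [0.5, 0.4, 0.1]  # made-up relevance weights
weighted_mean = sum(v * w for v, w in zip(values, weights)) / sum(weights)
plain_mean = sum(values) / len(values)
print(weighted_mean, plain_mean)  # 12.8 versus about 17.33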
21. Have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data.
A probability distribution
Probability density
Law of Parsimony
Ratio measurements
22. Is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations ...
Probability density functions
Probability and statistics
Step 3 of a statistical experiment
Observational study
23. Can be, for example, the possible outcomes of a dice roll (but it is not assigned a value). The distribution function of a random variable gives the probability of different results. We can also derive the mean and variance of a random variable.
Placebo effect
A random variable
Descriptive statistics
Beta value
24. A list of individuals from which the sample is actually selected.
Coefficient of determination
Sampling frame
Conditional distribution
variance of X
25. Is the exact middle value of a set of numbers. Arrange the numbers in numerical order. Find the value in the middle of the list.
Probability
Ratio measurements
The median value
An Elementary event
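Study note for item 25: finding the median means sorting the values and taking the middle one; with an even count, the two middle values are averaged (see item 41). A minimal Python sketch with made-up data:

# Illustrative sketch: median = middle value of the sorted list;
# with an even number of values, average the two middle values.
def median(values):
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([7, 1, 5]))     # 5
print(median([7, 1, 5, 3]))  # 4.0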
26. The probability of the observed value or something more extreme under the assumption that the null hypothesis is true.
Step 2 of a statistical experiment
P-value
Ordinal measurements
Nominal measurements
27. Statistics involve methods of organizing, picturing, and summarizing information from samples or populations.
Descriptive
covariance of X and Y
inferential statistics
expected value of X
28. Many statistical methods seek to minimize the mean-squared error, and these are called
methods of least squares
Random variables
f(z), and its cdf by F(z).
Coefficient of determination
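Study note for item 28: the simplest least-squares method fits a line y = a + b*x by choosing a and b to minimize the sum of squared errors; for one predictor the solution has a closed form. A minimal Python sketch with made-up data:

# Illustrative sketch: ordinary least squares for a line y = a + b*x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x
print(a, b)  # intercept and slope minimizing the squared residuals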
29. Is the result of applying a statistical algorithm to a data set. It can also be described as an observable random variable.
A statistic
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Atomic event
Variable
30. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written 'Y | X') is the probability distribution of Y when X is known to be a particular value.
Binomial experiment
Type I errors
the population cumulants
Conditional distribution
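Study note for item 30: starting from a joint distribution of X and Y, the conditional distribution of Y given X = x rescales the joint probabilities for that x so they sum to 1. A minimal Python sketch; the joint table values are made up for illustration:

# Illustrative sketch: P(Y | X = x) from a joint probability table.
joint = {
    (0, 0): 0.10, (0, 1): 0.30,   # made-up P(X = x, Y = y), summing to 1
    (1, 0): 0.20, (1, 1): 0.40,
}

def conditional_y_given_x(joint, x):
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return {y: p / p_x for (xi, y), p in joint.items() if xi == x}

print(conditional_y_given_x(joint, 1))  # {0: 0.333..., 1: 0.666...}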
31. Is used to describe probability in a continuous probability distribution. For example, you can't say that the probability of a man being six feet tall is 20%, but you can say he has a 20% chance of being between five and six feet tall.
Quantitative variable
Block
Probability density
The arithmetic mean of a set of numbers x1, x2, ..., xn
32. Is a function that gives the probability of all elements in a given space; see the list of probability distributions.
Ordinal measurements
A probability distribution
A Random vector
the sample mean, the sample variance s2, the sample correlation coefficient r, the sample cumulants kr.
33. In Bayesian inference, this represents prior beliefs or other information that is available before new data or observations are taken into account.
Prior probability
the sample mean, the sample variance s2, the sample correlation coefficient r, the sample cumulants kr.
Alpha value (Level of Significance)
inferential statistics
34. A data value that falls outside the overall pattern of the graph.
A data set
Outlier
Power of a test
A population or statistical population
35. (pdfs) and probability mass functions are denoted by lower case letters, e.g. f(x).
The arithmetic mean of a set of numbers x1, x2, ..., xn
An estimate of a parameter
Probability density functions
f(z), and its cdf by F(z).
36. (or just likelihood) is a conditional probability function considered a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with the number k from a bag of n balls, numbered 1 to n. Then ...
Seasonal effect
Statistics
Individual
A likelihood function
37. Is the probability distribution, under repeated sampling of the population, of a given statistic.
P-value
A sampling distribution
Probability density
Cumulative distribution functions
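Study note for item 37: the sampling distribution of a statistic can be approximated by repeatedly drawing samples of the same size from a population and recording the statistic each time. A minimal Python sketch using the sample mean and a made-up population:

import random
import statistics

# Illustrative sketch: approximate sampling distribution of the sample mean.
random.seed(0)
population = [random.gauss(50, 10) for _ in range(100_000)]
sample_means = [statistics.mean(random.sample(population, 10)) for _ in range(2_000)]
print(statistics.mean(sample_means))   # close to the population mean (about 50)
print(statistics.stdev(sample_means))  # roughly 10 / sqrt(10)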
38. Working from a null hypothesis, two basic forms of error are recognized:
The Mean of a random variable
Posterior probability
Probability and statistics
Type I errors & Type II errors
39. (also called statistical variability) is a measure of how diverse some data is. It can be expressed by the variance or the standard deviation.
A sampling distribution
Inferential statistics
Qualitative variable
Statistical dispersion
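Study note for item 39: dispersion is usually reported as the variance or the standard deviation (its square root). A minimal Python sketch with made-up data, using the standard-library statistics module:

import statistics

# Illustrative sketch: population variance and standard deviation.
data = [2, 4, 4, 4, 5, 5, 7, 9]
print(statistics.pvariance(data))  # 4
print(statistics.pstdev(data))     # 2.0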
40. A numerical measure that assesses the strength of a linear relationship between two variables.
Correlation coefficient
Quantitative variable
The Mean of a random variable
Independence or Statistical independence
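Study note for item 40: Pearson's correlation coefficient r divides the covariance of the two variables by the product of their standard deviations, giving a value between -1 and +1. A minimal Python sketch with made-up data:

import math

# Illustrative sketch: Pearson's r for paired data.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.5, 5.5, 8.0, 10.5]
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
print(cov / (sd_x * sd_y))  # close to +1 for a strong positive linear trend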
41. When there is an even number of values...
Independent Selection
Prior probability
That is the median value
Type I errors
42. Is the probability of some event A, assuming event B. Conditional probability is written P(A|B) and is read 'the probability of A, given B'.
the population mean
A probability distribution
Conditional probability
Treatment
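Study note for item 42: P(A | B) = P(A and B) / P(B). A minimal Python sketch using one roll of a fair die, with A = 'the roll is even' and B = 'the roll is greater than 3' as illustrative events:

from fractions import Fraction

# Illustrative sketch: conditional probability on a single fair die roll.
outcomes = range(1, 7)
p = Fraction(1, 6)
p_b = sum(p for x in outcomes if x > 3)                       # P(B) = 3/6
p_a_and_b = sum(p for x in outcomes if x > 3 and x % 2 == 0)  # P(A and B) = 2/6
print(p_a_and_b / p_b)  # P(A | B) = 2/3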
43. Descriptive statistics and inferential statistics (a.k.a. predictive statistics) together comprise
σ-algebras
Ratio measurements
applied statistics
Law of Parsimony
44. Gives the probability of events in a probability space.
experimental studies and observational studies.
the population cumulants
Probability and statistics
A Probability measure
45. Two variables such that their effects on the response variable cannot be distinguished from each other.
Simulation
A data point
Mutual independence
Confounded variables
46. Is the length of the smallest interval which contains all the data.
That value is the median value
Pairwise independence
the population variance
The Range
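Study note for item 46: the range is simply the maximum minus the minimum, the length of the smallest interval containing all the data. A minimal Python sketch with made-up data:

# Illustrative sketch: the range of a data set.
data = [12, 3, 7, 25, 9]
print(max(data) - min(data))  # 22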
47. (cdfs) are denoted by upper case letters, e.g. F(x).
Likert scale
Cumulative distribution functions
Variable
Probability density
48. Is a process of selecting observations to obtain knowledge about a population. There are many methods for choosing the sample on which to make the observations.
Sampling
Divide the sum by the number of values.
Estimator
the population correlation
49. Is data arising from counting that can take only non-negative integer values.
The variance of a random variable
Probability density functions
Count data
Parameter
50. A numerical measure that describes an aspect of a sample.
Ordinal measurements
Experimental and observational studies
Statistic
Correlation coefficient