Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. When information in a contingency table is re-organized into more or fewer categories - the relationships seen can change or reverse.
2. Rejecting a true null hypothesis.
Type 1 Error
categorical variables
Inferential statistics
A probability distribution
3. Is that part of a population which is actually observed.
Type 1 Error
A sample
variance of X
Quantitative variable
4. Is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case - the researchers would collect observations of both smokers and non-smokers and then look for the number of cases of lung cancer in each group.
Observational study
Variability
Estimator
Correlation coefficient
5. Is defined as the expected value of the random variable (X - µ)(Y - ν) - where µ = E(X) and ν = E(Y). (Written out in full after the choices.)
The Covariance between two random variables X and Y - with expected values E(X) = µ and E(Y) = ν
Binomial experiment
Marginal probability
Confounded variables
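For reference, the definition in item 5 written out in full (µ and ν are just the labels used here for the two expected values):

\[
\operatorname{Cov}(X, Y) = E\big[(X - \mu)(Y - \nu)\big] = E[XY] - \mu\nu, \qquad \mu = E(X),\ \nu = E(Y).
\]

Setting Y = X recovers the variance of X, which is also one of the terms in this quiz.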
6. Planning the research - including finding the number of replicates of the study - using the following information: preliminary estimates regarding the size of treatment effects - alternative hypotheses - and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary.
Step 1 of a statistical experiment
Placebo effect
Particular realizations of a random variable
Divide the sum by the number of values.
7. Can be - for example - the possible outcomes of a dice roll (but it is not assigned a value). The distribution function of a random variable gives the probability of different results. We can also derive the mean and variance of a random variable.
A random variable
covariance of X and Y
Lurking variable
A statistic
8. κr
Block
the population cumulants
nominal - ordinal - interval - and ratio
Probability density
9. Where the null hypothesis fails to be rejected and an actual difference between populations is missed - giving a 'false negative'.
applied statistics
quantitative variables
Bias
Type II errors
10. Changes over time that show a regular periodicity in the data where regular means over a fixed interval; the time between repetitions is called the period.
Sampling frame
Sampling
Seasonal effect
An experimental study
11. Some commonly used symbols for sample statistics
Power of a test
Mutual independence
Inferential statistics
the sample mean x̄ - the sample variance s² - the sample correlation coefficient r - the sample cumulants kr.
12. σ² (see the formulas after the choices)
A Statistical parameter
the population variance
the population correlation
Variability
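For reference, the standard population and sample variance formulas (x̄ denotes the sample mean of n observations):

\[
\sigma^2 = E\big[(X - \mu)^2\big], \qquad s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 .
\]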
13. A collection of events is mutually independent if for any subset of the collection - the joint probability of all events occurring is equal to the product of the joint probabilities of the individual events. Think of the result of a series of coin-flips. (The condition is written in symbols after the choices.)
Simpson's Paradox
Type I errors
Standard error
Mutual independence
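For reference, the condition in item 13 in symbols: for every finite subset A_1, ..., A_k of the collection,

\[
P(A_1 \cap A_2 \cap \cdots \cap A_k) = P(A_1)\,P(A_2)\cdots P(A_k).
\]

Pairwise independence of all pairs is weaker than this and does not by itself imply mutual independence.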
14. Is a parameter that indexes a family of probability distributions.
Variable
the population variance
A Statistical parameter
methods of least squares
15. Cov[X, Y]:
Bias
covariance of X and Y
Parameter
Trend
16. Two events are independent if the outcome of one does not affect that of the other (for example - getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly - when we assert that two random variables are independent - we mean that the outcome of one does not affect the distribution of the other. (The conditions are written in symbols after the choices.)
Independence or Statistical independence
A probability distribution
Dependent Selection
Nominal measurements
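For reference, independence in symbols, for two events and for two random variables (F denotes a cumulative distribution function):

\[
P(A \cap B) = P(A)\,P(B), \qquad F_{X,Y}(x, y) = F_X(x)\,F_Y(y) \ \text{ for all } x, y.
\]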
17. Interpretation of statistical information - in that the assumption is that whatever is proposed as a cause has no effect on the variable being measured - can often involve the development of a
The Mean of a random variable
the population variance
A Probability measure
Null hypothesis
18. Samples are drawn from two different populations such that there is a matching of the first sample data drawn and a corresponding data value in the second sample data.
The standard deviation
quantitative variables
Valid measure
Dependent Selection
19. Is used to describe probability in a continuous probability distribution. For example - you can't say that the probability of a man being six feet tall is 20% - but you can say he has a 20% chance of being between five and six feet tall. (A worked version of this example follows the choices.)
A data set
Nominal measurements
Ratio measurements
Probability density
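A worked version of the height example in item 19, assuming purely for illustration a density f for height H measured in feet:

\[
P(5 \le H \le 6) = \int_{5}^{6} f(h)\,dh = 0.20 .
\]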
20. Working from a null hypothesis - two basic forms of error are recognized (summarized in symbols after the choices):
σ-algebras
Statistical inference
Kurtosis
Type I errors & Type II errors
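For reference, the two error types in symbols, together with the related terms 'Alpha value' and 'Power of a test' that appear elsewhere in this quiz:

\[
\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}), \qquad \beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false}), \qquad \text{power} = 1 - \beta .
\]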
21. (or multivariate random variable) is a vector whose components are random variables on the same probability space.
P-value
A data set
Probability
A Random vector
22. Gives the probability of events in a probability space.
Descriptive statistics
Step 1 of a statistical experiment
A Probability measure
Cumulative distribution functions
23. µ
the population mean
A probability distribution
An estimate of a parameter
Variability
24. Gives the probability distribution for a continuous random variable.
The standard deviation
A probability density function
the sample or population mean
Independence or Statistical independence
25. The result of a Bayesian analysis that encapsulates the combination of prior beliefs or information with observed data.
Bias
Confounded variables
Posterior probability
Nominal measurements
26. In Bayesian inference - this represents prior beliefs or other information that is available before new data or observations are taken into account.
f(z) - and its cdf by F(z).
Inferential statistics
Type 1 Error
Prior probability
27. Is a measure of the asymmetry of the probability distribution of a real-valued random variable. Roughly speaking - a distribution has positive skew (right-skewed) if the higher tail is longer and negative skew (left-skewed) if the lower tail is longer. (Formula after the choices.)
Skewness
Mutual independence
Alpha value (Level of Significance)
Correlation
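For reference, the standard moment definition of skewness for a random variable with mean µ and standard deviation σ:

\[
\gamma_1 = E\!\left[\left(\frac{X - \mu}{\sigma}\right)^{3}\right].
\]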
28. (or just likelihood) is a conditional probability function considered a function of its second argument with its first argument held fixed. For example - imagine pulling a numbered ball with the number k from a bag of n balls - numbered 1 to n. (One way to finish the example is shown after the choices.)
A likelihood function
Type 1 Error
Step 1 of a statistical experiment
Credence
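One standard way to finish the ball example in item 28 (an illustration, not necessarily the quiz author's exact wording): treating the observed ball number k as fixed, the likelihood of the bag size n is

\[
L(n) = P(k \mid n) = \begin{cases} 1/n, & n \ge k, \\ 0, & n < k, \end{cases}
\]

which is maximized at n = k.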
29. Is inference about a population from a random sample drawn from it or - more generally - about a random process from its observed behavior during a finite period of time.
Ratio measurements
Probability density
Statistical inference
A Random vector
30. (or expectation) of a random variable is the sum of the probability of each possible outcome of the experiment multiplied by its payoff ('value'). Thus - it represents the average amount one 'expects' to win per bet if bets with identical odds are repeated many times. (Formula and a quick example after the choices.)
The Expected value
nominal - ordinal - interval - and ratio
Prior probability
the population variance
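For reference, the definition in item 30 in symbols, with a fair six-sided die as a quick check:

\[
E[X] = \sum_i x_i\,P(X = x_i), \qquad E[\text{die}] = \tfrac{1}{6}(1 + 2 + 3 + 4 + 5 + 6) = 3.5 .
\]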
31. Given two jointly distributed random variables X and Y - the marginal distribution of X is simply the probability distribution of X ignoring information about Y. (Formula after the choices.)
Divide the sum by the number of values.
variance of X
An event
Marginal distribution
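For reference, the marginal distribution of X in symbols (discrete case):

\[
P(X = x) = \sum_{y} P(X = x,\ Y = y).
\]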
32. Can refer either to a sample not being representative of the population - or to the difference between the expected value of an estimator and the true value.
A Random vector
A data set
The arithmetic mean of a set of numbers x1, x2, ..., xn
Bias
33. Another name for elementary event.
Statistical dispersion
Atomic event
Descriptive
That is the median value
34. Is a process of selecting observations to obtain knowledge about a population. There are many methods for choosing the sample on which to make the observations.
Qualitative variable
Law of Parsimony
Step 2 of a statistical experiment
Sampling
35. Samples are drawn from two different populations such that the sample data drawn from one population is completely unrelated to the selection of sample data from the other population.
A data point
quantitative variables
Descriptive
Independent Selection
36. Is data arising from counting that can take only non-negative integer values.
Count data
quantitative variables
A probability density function
An experimental study
37. When you have two or more competing models - choose the simpler of the two models.
Law of Parsimony
Bias
Statistical dispersion
Law of Large Numbers
38. Two variables such that their effects on the response variable cannot be distinguished from each other.
Independent Selection
Standard error
Confounded variables
Inferential
39. Is used in 'mathematical statistics' (alternatively - 'statistical theory') to study the sampling distributions of sample statistics and - more generally - the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method.
Probability
Estimator
Residuals
Count data
40. In number theory - scatter plots of data generated by a distribution function may be transformed with familiar tools used in statistics to reveal underlying patterns - which may then lead to
applied statistics
hypotheses
Kurtosis
Simulation
41. Uses patterns in the sample data to draw inferences about the population represented - accounting for randomness. These inferences may take the form of: answering yes/no questions about the data (hypothesis testing) - estimating numerical characteristics of the data (estimation) - describing associations within the data (correlation) - and modeling relationships within the data (regression).
Inferential statistics
Type I errors
Lurking variable
The variance of a random variable
42. Probability of rejecting a true null hypothesis.
Prior probability
nominal - ordinal - interval - and ratio
Alpha value (Level of Significance)
An estimate of a parameter
43. Are simply two different terms for the same thing: add the given values - then divide the sum by the number of values. (Formula and a quick example after the choices.)
Lurking variable
An estimate of a parameter
Seasonal effect
Average and arithmetic mean
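For reference, the formula and a quick numeric check:

\[
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \frac{2 + 4 + 9}{3} = 5 .
\]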
44. There are four main levels of measurement used in statistics - each with a different degree of usefulness in statistical research.
descriptive statistics
Skewness
The Range
nominal - ordinal - interval - and ratio
45. Have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data.
Trend
Average and arithmetic mean
A likelihood function
Ratio measurements
46. Where the null hypothesis is falsely rejected - giving a 'false positive'.
Credence
Type I errors
Trend
Variable
47. Failing to reject a false null hypothesis.
Ratio measurements
Confounded variables
Type 2 Error
Probability density
48. (or atomic event) is an event with only one element. For example - when pulling a card out of a deck - 'getting the jack of spades' is an elementary event - while 'getting a king or an ace' is not.
A data point
The Covariance between two random variables X and Y - with expected values E(X) = µ and E(Y) = ν
An Elementary event
Probability density functions
49. The proportion of the total variation that is explained by a linear regression model. (Formula after the choices.)
Coefficient of determination
The Range
σ-algebras
Confounded variables
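For reference, the usual formula in terms of residual and total sums of squares; for an ordinary least-squares fit with an intercept this equals the explained share SS_reg / SS_tot:

\[
R^2 = 1 - \frac{SS_{\text{res}}}{SS_{\text{tot}}} .
\]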
50. Is data that can take only two values - usually represented by 0 and 1.
Random variables
An experimental study
Binary data
Posterior probability