Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. Interpretation of statistical information can often involve the development of this: the assumption that whatever is proposed as a cause has no effect on the variable being measured.
Greek letters
Kurtosis
Null hypothesis
The arithmetic mean of a set of numbers x1, x2, ..., xn
2. Is a measure of the 'peakedness' of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations.
An event
Divide the sum by the number of values.
Kurtosis
Bias
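A quick Python sketch of the kurtosis idea above (not part of the original cards; the distributions and sample sizes are arbitrary choices). A heavy-tailed sample gives a clearly positive excess kurtosis, while a normal sample sits near 0 under scipy's default (Fisher) definition:

    # Comparing sample excess kurtosis of a heavy-tailed and a normal sample.
    import numpy as np
    from scipy.stats import kurtosis

    rng = np.random.default_rng(0)
    normal_sample = rng.normal(size=100_000)            # light tails
    heavy_sample = rng.standard_t(df=5, size=100_000)   # heavier tails

    print(kurtosis(normal_sample))   # close to 0
    print(kurtosis(heavy_sample))    # clearly positive: more infrequent extreme deviations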
3. To prove the guiding theory further, these predictions are tested as well, as part of the scientific method. If the inference holds true, then the descriptive statistics of the new data increase the soundness of that
Mutual independence
hypothesis
A sampling distribution
That value is the median value
4. A collection of events is mutually independent if for any subset of the collection, the joint probability of all events occurring is equal to the product of the probabilities of the individual events. Think of the result of a series of coin flips.
Statistic
Step 1 of a statistical experiment
Mutual independence
Prior probability
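A small Python sketch of mutual independence for three fair coin flips (a hypothetical setup, not from the cards): for every subset of the "flip i shows heads" events, the probability that all of them occur equals the product of their individual probabilities.

    from itertools import product, combinations
    from fractions import Fraction

    outcomes = list(product("HT", repeat=3))    # 8 equally likely outcomes
    prob = Fraction(1, len(outcomes))           # 1/8 each

    def p(event):                               # event: a set of outcomes
        return prob * len(event)

    events = [{o for o in outcomes if o[i] == "H"} for i in range(3)]

    for r in range(2, 4):
        for subset in combinations(events, r):
            joint = set.intersection(*subset)
            assert p(joint) == Fraction(1, 2) ** len(subset)
    print("every subset satisfies P(all occur) == product of individual probabilities")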
5. A group of individuals sharing some common features that might affect the treatment.
Independence or Statistical independence
Conditional distribution
Block
Estimator
6. Working from a null hypothesis, two basic forms of error are recognized:
Sampling frame
Average and arithmetic mean
Type I errors & Type II errors
Outlier
7. A subjective estimate of probability.
Statistic
The average, or arithmetic mean
Inferential statistics
Credence
8. Can be, for example, the possible outcomes of a dice roll (but it is not assigned a value). The distribution function of a random variable gives the probability of different results. We can also derive the mean and variance of a random variable.
A random variable
Treatment
Simulation
The sample space
9. Given two jointly distributed random variables X and Y, the marginal distribution of X is simply the probability distribution of X ignoring information about Y.
Binary data
The Covariance between two random variables X and Y, with expected values E(X) =
Law of Parsimony
Marginal distribution
10. Is the probability of an event, ignoring any information about other events. The marginal probability of A is written P(A). Contrast with conditional probability.
descriptive statistics
The Expected value
Marginal probability
A probability distribution
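A tiny Python sketch of marginalization, tying together the two cards above: starting from a made-up joint distribution of X and Y, the marginal probability of each value of X is obtained by summing the joint probabilities over all values of Y.

    # P(X = x) = sum over y of P(X = x, Y = y); the numbers below are arbitrary.
    joint = {
        (0, 0): 0.10, (0, 1): 0.20,
        (1, 0): 0.30, (1, 1): 0.40,
    }

    marginal_x = {}
    for (x, y), p in joint.items():
        marginal_x[x] = marginal_x.get(x, 0.0) + p

    print(marginal_x)   # {0: 0.30..., 1: 0.70...} -- the marginal probabilities of X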
11. A measurement such that the random error is small.
Residuals
Reliable measure
Likert scale
An Elementary event
12. A variable that has an important effect on the response variable and the relationship among the variables in a study, but is not one of the explanatory variables studied, either because it is unknown or not measured.
Individual
variance of X
Independent Selection
Lurking variable
13. Is a process of selecting observations to obtain knowledge about a population. There are many methods for choosing the sample on which to make the observations.
A Probability measure
Particular realizations of a random variable
inferential statistics
Sampling
14. Where the null hypothesis is falsely rejected, giving a 'false positive'.
Type I errors
Step 1 of a statistical experiment
A statistic
Seasonal effect
15. Probability of accepting a false null hypothesis.
Beta value
the population correlation
Placebo effect
A data point
16. The probability of correctly detecting a false null hypothesis.
Seasonal effect
The median value
Confounded variables
Power of a test
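The three ideas in the cards above (Type I error, beta, and power) can be estimated by simulation. A rough Python sketch, assuming a one-sided z-test on the mean of 25 normal observations with known sigma = 1 and a true mean of 0.5 under the alternative; all of these numbers are arbitrary choices:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n, sigma, alpha = 25, 1.0, 0.05
    crit = norm.ppf(1 - alpha)              # reject H0: mu = 0 when z > crit

    def reject_rate(true_mu, trials=20_000):
        samples = rng.normal(loc=true_mu, scale=sigma, size=(trials, n))
        z = samples.mean(axis=1) / (sigma / np.sqrt(n))
        return np.mean(z > crit)

    print("Type I error rate (true mu = 0):", reject_rate(0.0))   # close to alpha = 0.05
    power = reject_rate(0.5)                                      # false H0, true mu = 0.5
    print("power:", power, "beta:", 1 - power)                    # beta = P(accepting the false H0)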
17. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, two random variables are independent if the value of one gives no information about the value of the other.
Atomic event
The Expected value
the population cumulants
Independence or Statistical independence
18. The collection of all possible outcomes in an experiment.
Block
Sample space
Statistical dispersion
variance of X
19. Two variables such that their effects on the response variable cannot be distinguished from each other.
Confounded variables
Marginal distribution
Atomic event
Step 3 of a statistical experiment
20. Statistics involve methods of organizing, picturing, and summarizing information from samples or populations.
Type I errors
Null hypothesis
Descriptive
Parameter
21. Data are gathered and correlations between predictors and response are investigated.
Correlation
Placebo effect
the sample or population mean
observational study
22. Have no meaningful rank order among values.
Particular realizations of a random variable
Nominal measurements
The Expected value
Simpson's Paradox
23. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Joint probability
A Distribution function
A probability distribution
A statistic
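A small Python sketch of the joint-probability idea above, using two fair dice and two hypothetical events (the choice of events is arbitrary); since the events happen to be independent, P(A and B) also equals P(A) times P(B):

    from itertools import product
    from fractions import Fraction

    outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs
    A = {o for o in outcomes if o[0] == 6}            # first die shows 6
    B = {o for o in outcomes if o[1] % 2 == 0}        # second die is even

    p = lambda e: Fraction(len(e), len(outcomes))
    print(p(A & B))       # 1/12 = P(A and B)
    print(p(A) * p(B))    # also 1/12 here, because A and B are independent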
24. Some commonly used symbols for population parameters
An experimental study
the population mean
Probability density
Trend
25. Occurs when a subject receives no treatment, but (incorrectly) believes he or she is in fact receiving treatment and responds favorably.
The variance of a random variable
A population or statistical population
Simulation
Placebo effect
26. In Bayesian inference, this represents prior beliefs or other information that is available before new data or observations are taken into account.
Bias
Prior probability
Parameter, or 'statistical parameter'
Step 1 of a statistical experiment
27. Is its expected value. The mean (or sample mean) of a data set is just the average value.
the population correlation
The Mean of a random variable
Ratio measurements
Marginal probability
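A quick Python sketch of the expected-value idea above for a fair die, where the mean is the sum over x of x * P(X = x) = 3.5, and the sample mean of many simulated rolls lands close to it:

    import numpy as np
    from fractions import Fraction

    values = range(1, 7)
    expected = sum(Fraction(x, 6) for x in values)   # sum of x * (1/6)
    print(expected)                                  # Fraction(7, 2), i.e. 3.5

    rng = np.random.default_rng(2)
    rolls = rng.integers(1, 7, size=100_000)         # upper bound is exclusive
    print(rolls.mean())                              # sample mean, close to 3.5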
28. Long-term upward or downward movement over time.
Probability
A data point
Trend
The Expected value
29. The errors, or differences between the estimated response ŷi and the actual measured response yi, collectively.
Ordinal measurements
covariance of X and Y
A sampling distribution
Residuals
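A short Python sketch of residuals from a least-squares line fit. The data points are made up, and np.polyfit is just one convenient way to do the fit:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

    slope, intercept = np.polyfit(x, y, deg=1)   # least-squares line
    y_hat = slope * x + intercept                # estimated responses
    residuals = y - y_hat                        # the errors, collectively

    print(residuals)
    print(residuals.sum())                       # roughly zero for a least-squares fit with intercept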
30. A scale that represents an ordinal measurement, such as rating looks on a scale from 1 to 10.
Mutual independence
Likert scale
f(z), and its cdf by F(z).
Statistic
31. A pairwise independent collection of random variables is a set of random variables any two of which are independent.
Type I errors
Step 3 of a statistical experiment
Pairwise independence
Type 2 Error
32. A numerical measure that describes an aspect of a population.
A Distribution function
Parameter
Descriptive
Simulation
33. A variable that describes an individual by placing the individual into a category or group.
Placebo effect
Posterior probability
Qualitative variable
The Covariance between two random variables X and Y, with expected values E(X) =
34. Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol. 4. Further examining the data set in secondary analyses, to suggest new hypotheses for future study. 5. Documenting and presenting the results of the study.
Type II errors
Bias
Coefficient of determination
Step 3 of a statistical experiment
35. The probability distribution of a sample statistic based on all the possible simple random samples of the same size from a population.
Sampling Distribution
Descriptive
Greek letters
Individual
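A rough Python sketch of the sampling-distribution idea above, approximated by repeatedly drawing simple random samples of size 10 from an assumed (exponential) population; the population, sample size, and number of repetitions are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(3)
    population = rng.exponential(scale=2.0, size=100_000)   # assumed skewed population

    sample_means = np.array([
        rng.choice(population, size=10, replace=False).mean()
        for _ in range(5_000)
    ])

    print(population.mean())     # the population parameter (about 2.0)
    print(sample_means.mean())   # the sampling distribution of the mean is centred near it
    print(sample_means.std())    # its spread estimates the standard error of the mean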
36. Is data that can take only two values - usually represented by 0 and 1.
Binary data
Standard error
P-value
Probability density
37. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as
That value is the median value
categorical variables
Probability density
Statistical adjustment
38. Is the exact middle value of a set of numbers. Arrange the numbers in numerical order and find the value in the middle of the list.
Binomial experiment
Trend
Atomic event
The median value
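A minimal Python sketch of the recipe above: sort the numbers and take the middle value (for an even count, the usual convention is to average the two middle values):

    def median(numbers):
        ordered = sorted(numbers)          # arrange in numerical order
        n = len(ordered)
        mid = n // 2
        if n % 2 == 1:
            return ordered[mid]            # the exact middle value
        return (ordered[mid - 1] + ordered[mid]) / 2

    print(median([7, 1, 5]))       # 5
    print(median([7, 1, 5, 3]))    # 4.0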
39. Statistics involve methods of using information from a sample to draw conclusions regarding the population.
A data set
Kurtosis
Statistic
Inferential
40. Is often denoted by placing a caret over the corresponding symbol, e.g. θ̂, pronounced 'theta hat'.
Null hypothesis
Placebo effect
A Random vector
An estimate of a parameter
41. Changes over time that show a regular periodicity in the data, where 'regular' means over a fixed interval; the time between repetitions is called the period.
inferential statistics
Law of Large Numbers
the sample mean, the sample variance s2, the sample correlation coefficient r, the sample cumulants kr.
Seasonal effect
42. Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together.
Joint distribution
The variance of a random variable
A statistic
Placebo effect
43. Describes a characteristic of an individual to be measured or observed.
Coefficient of determination
Block
Residuals
Variable
44. A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables or responses.
methods of least squares
Independence or Statistical independence
Experimental and observational studies
A sampling distribution
45. Is a measure of the asymmetry of the probability distribution of a real-valued random variable. Roughly speaking, a distribution has positive skew (right-skewed) if the higher tail is longer and negative skew (left-skewed) if the lower tail is longer.
Skewness
Probability density
Binomial experiment
variance of X
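A quick Python sketch of the skewness idea above using scipy.stats.skew; the exponential and normal samples are arbitrary choices for a right-skewed and a roughly symmetric distribution:

    import numpy as np
    from scipy.stats import skew

    rng = np.random.default_rng(4)
    print(skew(rng.exponential(size=100_000)))   # positive: the upper tail is longer
    print(skew(rng.normal(size=100_000)))        # close to 0: roughly symmetric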
46. Also called correlation coefficient, is a numeric measure of the strength of linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson correlation coefficient.
The median value
Ordinal measurements
Correlation
Standard error
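A short Python sketch of the idea above, with made-up height and shoe-size data and the Pearson correlation coefficient computed via np.corrcoef; the linear relation and noise level are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(5)
    height = rng.normal(170, 10, size=500)                    # hypothetical heights (cm)
    shoe_size = 0.15 * height + rng.normal(0, 0.5, size=500)  # assumed linear relation plus noise

    r = np.corrcoef(height, shoe_size)[0, 1]
    print(r)    # strongly positive (close to +1 here): a strong linear relationship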
47. Gives the probability of events in a probability space.
the population cumulants
A Probability measure
Correlation coefficient
descriptive statistics
48. S^2
the population variance
Descriptive
Null hypothesis
Cumulative distribution functions
49. Used to reduce bias, this measure weights the more relevant information higher than less relevant information.
Atomic event
Statistical adjustment
Cumulative distribution functions
σ-algebras
50. ?
the population correlation
Treatment
the sample or population mean
A Statistical parameter