Test your basic knowledge | CLEP General Mathematics: Probability and Statistics
Subjects: clep, math
Instructions: Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The three wrong answers for each question are randomly chosen from answers to other questions, so at times the answers may seem obvious, but taking the test repeatedly will still reinforce your understanding.
1. Is its expected value. The mean (or sample mean) of a data set is just the average value.
The Mean of a random variable
Likert scale
Individual
Null hypothesis
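The sample mean mentioned in item 1 can be sketched in a few lines of Python (the data values here are made up for illustration):

```python
# Sample mean: sum the observed values and divide by the count.
data = [4, 8, 6, 2]
mean = sum(data) / len(data)
print(mean)  # 5.0
```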
2. Of a group of numbers is the center point of all those number values.
Pairwise independence
The average, or arithmetic mean
descriptive statistics
Law of Parsimony
3. The proportion of the explained variation by a linear regression model in the total variation.
Pairwise independence
The arithmetic mean of a set of numbers x1, x2, ..., xn
Coefficient of determination
Simple random sample
4. Is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis.
Observational study
A sampling distribution
Simulation
Interval measurements
5. Because variables conforming only to nominal or ordinal measurements cannot reasonably be measured numerically, they are sometimes grouped together as
categorical variables
A data point
Probability density
An Elementary event
6. A measure that is relevant or appropriate as a representation of that property.
Independence or Statistical independence
A data set
experimental studies and observational studies.
Valid measure
7. Have imprecise differences between consecutive values, but have a meaningful order to those values.
Conditional probability
Ordinal measurements
Count data
Simpson's Paradox
8. Where the null hypothesis is falsely rejected giving a 'false positive'.
Mutual independence
the population mean
Type I errors
Random variables
9. Is inference about a population from a random sample drawn from it or, more generally, about a random process from its observed behavior during a finite period of time.
A Random vector
Statistical inference
Power of a test
Qualitative variable
10. The probability distribution of a sample statistic based on all the possible simple random samples of the same size from a population.
Joint distribution
Prior probability
Sampling Distribution
the population correlation
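The sampling distribution in item 10 can be approximated by simulation; this Python sketch (the population and sample size are made up) draws many simple random samples and records each sample mean:

```python
import random

random.seed(0)  # reproducible illustration

population = list(range(1, 101))              # made-up population: 1..100
sample_means = [
    sum(random.sample(population, 10)) / 10   # mean of one SRS of size 10
    for _ in range(5000)
]

# The simulated sampling distribution centers near the population mean (50.5).
print(round(sum(sample_means) / len(sample_means), 1))
```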
11. Cov[X, Y]:
Block
Type 2 Error
covariance of X and Y
Posterior probability
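The Cov[X, Y] of item 11 can be computed directly from paired data; a minimal Python sketch (made-up values, population form dividing by n):

```python
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]

mx = sum(xs) / len(xs)   # mean of X
my = sum(ys) / len(ys)   # mean of Y

# Covariance: average product of deviations from the means.
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
print(cov)  # 2.5
```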
12. Any specific experimental condition applied to the subjects
expected value of X
Treatment
Lurking variable
The standard deviation
13. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, two random variables are independent if knowing the value of one tells us nothing about the distribution of the other.
Law of Large Numbers
the sample or population mean
Independence or Statistical independence
Null hypothesis
14. E[X]:
Sampling Distribution
Random variables
The Expected value
expected value of X
15. Data are gathered and correlations between predictors and response are investigated.
A Probability measure
observational study
Cumulative distribution functions
Interval measurements
16. Probability of accepting a false null hypothesis.
Residuals
Beta value
f(z), and its cdf by F(z).
the population cumulants
17. A variable describes an individual by placing the individual into a category or a group.
categorical variables
Atomic event
Correlation
Qualitative variable
18. Is the result of applying a statistical algorithm to a data set. It can also be described as an observable random variable.
Quantitative variable
Skewness
A statistic
Beta value
19. (or just likelihood) is a conditional probability function considered a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with the number k from a bag of n balls, numbered 1 to n.
The arithmetic mean of a set of numbers x1, x2, ..., xn
A likelihood function
Trend
Nominal measurements
20. Is the probability of an event, ignoring any information about other events. The marginal probability of A is written P(A). Contrast with conditional probability.
Residuals
Dependent Selection
Marginal probability
Type 1 Error
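The marginal probability in item 20 is obtained by summing a joint distribution over the other variable; a Python sketch with a made-up joint table:

```python
# Made-up joint distribution over two binary events.
joint = {
    ("A", "B"): 0.2,
    ("A", "not B"): 0.3,
    ("not A", "B"): 0.1,
    ("not A", "not B"): 0.4,
}

# Marginal P(A): sum P(A, b) over every outcome b of the second event.
p_a = sum(p for (a, _), p in joint.items() if a == "A")
print(round(p_a, 1))  # 0.5
```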
21. The errors, or differences between the estimated response ŷi and the actual measured response yi, collectively.
Residuals
Probability
A probability space
Bias
22. There are two major types of causal statistical studies. In both types, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.
experimental studies and observational studies.
A probability density function
Pairwise independence
Outlier
23. In particular, the pdf of the standard normal distribution is denoted by
Standard error
f(z), and its cdf by F(z).
Step 2 of a statistical experiment
Parameter, or 'statistical parameter'
24. Summarize the population data by describing what was observed in the sample numerically or graphically. Numerical descriptors include mean and standard deviation for continuous data types (like heights or weights), while frequency and percentage are more useful for describing categorical data.
Descriptive statistics
Estimator
Sample space
f(z), and its cdf by F(z).
25. Patterns in the data may be modeled in a way that accounts for randomness and uncertainty in the observations, and are then used for drawing inferences about the process or population being studied; this is called
A likelihood function
A Distribution function
inferential statistics
Reliable measure
26. Is the function that gives the probability distribution of a random variable. It cannot be negative - and its integral on the probability space is equal to 1.
Cumulative distribution functions
A Distribution function
hypotheses
Sampling Distribution
27. Statistics involve methods of organizing, picturing, and summarizing information from samples or populations.
Descriptive
Particular realizations of a random variable
Treatment
hypotheses
28. Interpretation of statistical information, in which the assumption is that whatever is proposed as a cause has no effect on the variable being measured, can often involve the development of a
The arithmetic mean of a set of numbers x1, x2, ..., xn
Interval measurements
Probability
Null hypothesis
29. Another name for elementary event.
Atomic event
Step 2 of a statistical experiment
categorical variables
The average, or arithmetic mean
30. To prove the guiding theory further, these predictions are tested as well, as part of the scientific method. If the inference holds true, then the descriptive statistics of the new data increase the soundness of that
Statistical dispersion
Bias
hypothesis
Marginal distribution
31. In number theory, scatter plots of data generated by a distribution function may be transformed with familiar tools used in statistics to reveal underlying patterns, which may then lead to
the population variance
Correlation
hypotheses
Bias
32. The collection of all possible outcomes in an experiment.
categorical variables
Sample space
Simpson's Paradox
Correlation coefficient
33. A scale that represents an ordinal scale such as looks on a scale from 1 to 10.
A sample
Marginal probability
Random variables
Likert scale
34. Is that part of a population which is actually observed.
A sample
Experimental and observational studies
Credence
Skewness
35. Statistical methods can be used for summarizing or describing a collection of data; this is called
descriptive statistics
An estimate of a parameter
The Covariance between two random variables X and Y, with expected values E(X) =
Placebo effect
36. There are four main levels of measurement used in statistics; each of these has a different degree of usefulness in statistical research.
Type 2 Error
Random variables
Correlation coefficient
nominal, ordinal, interval, and ratio
37. (or expectation) of a random variable is the sum of the probability of each possible outcome of the experiment multiplied by its payoff ('value'). Thus, it represents the average amount one 'expects' to win per bet if bets with identical odds are repeated many times.
The Mean of a random variable
A probability density function
A Random vector
The Expected value
38. Ratio and interval measurements, whether discrete or continuous, are grouped together due to their numerical nature as
Law of Parsimony
variance of X
Sampling
quantitative variables
39. Have both a meaningful zero value and the distances between different measurements defined; they provide the greatest flexibility in statistical methods that can be used for analyzing the data.
Standard error
Ratio measurements
Variability
Treatment
40. ?
the population correlation
That value is the median value
Correlation coefficient
The sample space
41. Uses patterns in the sample data to draw inferences about the population represented, accounting for randomness. These inferences may take the form of answering yes/no questions about the data (hypothesis testing) or estimating numerical characteristics of the data (estimation).
Kurtosis
Inferential statistics
Correlation coefficient
experimental studies and observational studies.
42. s²:
the sample mean, the sample variance s², the sample correlation coefficient r, the sample cumulants kr.
the population variance
observational study
Individual
43. Is a sample and the associated data points.
An estimate of a parameter
A probability density function
Estimator
A data set
44. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Simple random sample
Joint probability
A Random vector
σ-algebras
45. Rejecting a true null hypothesis.
Correlation
Type 1 Error
A probability density function
Random variables
46. Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together.
An Elementary event
The Mean of a random variable
Sampling Distribution
Joint distribution
47. The probability of correctly detecting a false null hypothesis.
Power of a test
Parameter, or 'statistical parameter'
variance of X
Variable
48. The probability of the observed value or something more extreme under the assumption that the null hypothesis is true.
Estimator
Quantitative variable
A probability distribution
P-value
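The p-value in item 48 can be illustrated with an exact binomial tail computation in Python (the coin-toss numbers are made up; the null hypothesis is a fair coin):

```python
from math import comb

n, observed = 100, 60  # 60 heads in 100 tosses (illustrative)

# One-sided p-value: probability of 60 or more heads under H0 (p = 0.5).
p_value = sum(comb(n, k) for k in range(observed, n + 1)) / 2**n
print(round(p_value, 4))
```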
49. Is a function of the known data that is used to estimate an unknown parameter; an estimate is the result from the actual application of the function to a particular set of data. The mean can be used as an estimator.
An Elementary event
Descriptive statistics
Placebo effect
Estimator
50. Many statistical methods seek to minimize the mean-squared error, and these are called
methods of least squares
Conditional probability
Seasonal effect
Quantitative variable
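The method of least squares in item 50 picks the line that minimizes the sum of squared residuals; a closed-form Python sketch with made-up data that lies exactly on y = 1 + 2x:

```python
xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]  # exactly y = 1 + 2x, so every residual is zero

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Slope and intercept from the standard least-squares formulas.
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx
print(a, b)  # 1.0 2.0
```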