Test your basic knowledge | CLEP General Mathematics: Probability and Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might find some answers obvious at times, but you will see that it reinforces your understanding each time you take the test.
1. Error also refers to the extent to which individual observations in a sample differ from a central value, such as
The average, or arithmetic mean
A statistic
the sample or population mean
descriptive statistics
2. There are two major types of causal statistical studies: In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.
Inferential statistics
The Covariance between two random variables X and Y, with expected values E(X) =
A Probability measure
experimental studies and observational studies.
3. (also called statistical variability) is a measure of how diverse some data is. It can be expressed by the variance or the standard deviation.
Statistical dispersion
Independence or Statistical independence
the sample or population mean
Standard error
4. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, when we assert that two random variables are independent, we mean that knowing the value of one gives no information about the value of the other.
Binomial experiment
Block
Average and arithmetic mean
Independence or Statistical independence
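To make the independence idea in item 4 concrete, here is a minimal Python sketch (the two die-roll events are illustrative, not part of the quiz) that enumerates both rolls and checks that P(A and B) = P(A) * P(B):

from itertools import product
from fractions import Fraction

# Enumerate all 36 equally likely outcomes of two fair die rolls.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    # Probability of an event given as a predicate over (roll1, roll2).
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

first_is_1 = lambda o: o[0] == 1      # the first roll shows a 1
second_is_1 = lambda o: o[1] == 1     # the second roll shows a 1

# Independence: P(A and B) equals P(A) * P(B).
p_both = prob(lambda o: first_is_1(o) and second_is_1(o))
print(prob(first_is_1), prob(second_is_1), p_both)       # 1/6 1/6 1/36
print(p_both == prob(first_is_1) * prob(second_is_1))    # True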
5. ?
the population correlation
Variability
Confounded variables
Joint distribution
6. Is that part of a population which is actually observed.
A sample
Standard error
Outlier
Correlation coefficient
7. A sample selected in such a way that each individual is equally likely to be selected, and any group of size n is equally likely to be selected.
Treatment
Simple random sample
methods of least squares
Conditional probability
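As a quick sketch of item 7: Python's random.sample draws without replacement, so every individual, and every group of size n, has the same chance of selection (the population and n below are made-up values):

import random

population = list(range(1, 101))   # hypothetical population of 100 individuals
n = 5

# Every individual, and every subset of size n, is equally likely to be chosen.
sample = random.sample(population, n)
print(sorted(sample))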
8. A consistent, repeated deviation of the sample statistic from the population parameter in the same direction when many samples are taken.
Bias
Interval measurements
Conditional probability
expected value of X
9. Have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit).
Statistical adjustment
Alpha value (Level of Significance)
Interval measurements
Step 2 of a statistical experiment
10. Is the function that gives the probability distribution of a random variable. It cannot be negative, and its integral on the probability space is equal to 1.
A Distribution function
Inferential
the population cumulants
Power of a test
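A rough numerical check of item 10, assuming the standard normal density as the example (plain Python, simple Riemann sum): the density is never negative and its integral is approximately 1.

import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of a normal distribution; it is never negative.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Crude Riemann sum of the density over a wide interval; the result is close to 1.
dx = 0.001
total = sum(normal_pdf(-8 + i * dx) * dx for i in range(int(16 / dx)))
print(round(total, 6))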
11. Of a group of numbers is the center point of all those number values.
Credence
The average, or arithmetic mean
the population mean
Sampling Distribution
12. Cov[X, Y]:
Inferential
P-value
covariance of X and Y
Simpson's Paradox
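A small sketch of the covariance named in item 12, with invented paired observations treated as equally likely outcomes:

# Paired observations, invented for illustration.
xs = [2.0, 4.0, 6.0, 8.0]
ys = [1.0, 3.0, 2.0, 5.0]

mean_x = sum(xs) / len(xs)          # E(X)
mean_y = sum(ys) / len(ys)          # E(Y)

# Cov[X, Y] = E[(X - E(X)) * (Y - E(Y))]
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / len(xs)
print(cov_xy)                        # 2.75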
13. Descriptive statistics and inferential statistics (a.k.a. predictive statistics) together comprise
Parameter, or 'statistical parameter'
applied statistics
Ratio measurements
variance of X
14. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as
categorical variables
Step 1 of a statistical experiment
A statistic
The Mean of a random variable
15. Gives the probability distribution for a continuous random variable.
the population mean
A probability density function
Simpson's Paradox
Nominal measurements
16. The probability of the observed value or something more extreme under the assumption that the null hypothesis is true.
An estimate of a parameter
Type I errors
P-value
Reliable measure
17. The collection of all possible outcomes in an experiment.
A sampling distribution
A Probability measure
Beta value
Sample space
18. Where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a 'false negative'.
Step 1 of a statistical experiment
Power of a test
Type II errors
Law of Parsimony
19. Is the result of applying a statistical algorithm to a data set. It can also be described as an observable random variable.
Count data
A statistic
A Distribution function
the sample or population mean
20. In Bayesian inference, this represents prior beliefs or other information that is available before new data or observations are taken into account.
φ(z), and its cdf by Φ(z).
Prior probability
Sampling Distribution
Mutual independence
21. Any specific experimental condition applied to the subjects
Parameter
Treatment
Divide the sum by the number of values.
Step 1 of a statistical experiment
22. Is one that explores the correlation between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers.
Outlier
inferential statistics
An experimental study
Observational study
23. Probability of accepting a false null hypothesis.
Beta value
A Distribution function
Prior probability
the population cumulants
24. Patterns in the data may be modeled in a way that accounts for randomness and uncertainty in the observations, and are then used for drawing inferences about the process or population being studied; this is called
A data point
Statistical adjustment
inferential statistics
Nominal measurements
25. σ²
That is the median value
the population variance
A Probability measure
Parameter
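For item 25, a short worked sketch of a variance computed as the average squared deviation from the mean (the data values are invented; the population form, dividing by N, is shown):

data = [2, 4, 4, 4, 5, 5, 7, 9]                      # invented values
mean = sum(data) / len(data)                         # 5.0

# Variance: the average of the squared deviations from the mean.
variance = sum((x - mean) ** 2 for x in data) / len(data)
print(mean, variance)                                # 5.0 4.0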
26. Is a subset of the sample space - to which a probability can be assigned. For example - on rolling a die - 'getting a five or a six' is an event (with a probability of one third if the die is fair).
Residuals
An event
Nominal measurements
Probability and statistics
27. Are two related but separate academic disciplines. Statistical analysis often uses probability distributions, and the two topics are often studied together. However, probability theory contains much that is mostly of mathematical interest and not directly relevant to statistics.
Probability and statistics
Cumulative distribution functions
the population variance
Seasonal effect
28. Is the most commonly used measure of statistical dispersion. It is the square root of the variance, and is generally written σ (sigma).
Type I errors & Type II errors
Step 1 of a statistical experiment
inferential statistics
The standard deviation
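Item 28 in code: the standard deviation is simply the square root of the variance (same invented data as the variance sketch above):

import math

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)

# The standard deviation is the square root of the variance.
std_dev = math.sqrt(variance)
print(std_dev)                                       # 2.0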
29. Can be a population parameter, a distribution parameter, an unobserved parameter (with different shades of meaning). In statistics, this is often a quantity to be estimated.
30. Is a typed measurement; it can be a boolean value, a real number, a vector (in which case it's also called a data vector), etc.
Simpson's Paradox
nominal, ordinal, interval, and ratio
A data point
Sample space
31. Is a process of selecting observations to obtain knowledge about a population. There are many methods to choose on which sample to do the observations.
Sampling
An Elementary event
inferential statistics
Standard error
32. To find the average, or arithmetic mean, of a set of numbers:
The standard deviation
Divide the sum by the number of values.
Binomial experiment
Marginal distribution
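A tiny worked example of item 32 (the numbers are arbitrary): add the values, then divide the sum by how many values there are.

values = [3, 7, 8, 10, 12]        # arbitrary example numbers
total = sum(values)               # 3 + 7 + 8 + 10 + 12 = 40
mean = total / len(values)        # 40 / 5 = 8.0
print(mean)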
33. Consists of a number of independent trials repeated under identical conditions. On each trial, there are two possible outcomes.
Independence or Statistical independence
A data point
Binomial experiment
Correlation coefficient
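A minimal simulation sketch of the binomial experiment in item 33; the number of trials and success probability below are made-up values:

import random

n, p = 10, 0.5       # hypothetical number of trials and success probability

# n independent trials under identical conditions, each with exactly two outcomes.
trials = [random.random() < p for _ in range(n)]
successes = sum(trials)
print(successes, "successes out of", n, "trials")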
34. Some commonly used symbols for sample statistics
the sample mean, the sample variance s², the sample correlation coefficient r, and the sample cumulants kr.
Kurtosis
inferential statistics
The standard deviation
35. In particular, the pdf of the standard normal distribution is denoted by
the population mean
Step 1 of a statistical experiment
φ(z), and its cdf by Φ(z).
Random variables
36.
expected value of X
A statistic
the population mean
Descriptive
37. Have no meaningful rank order among values.
Nominal measurements
A data point
A statistic
The Range
38. Working from a null hypothesis, two basic forms of error are recognized:
Prior probability
Statistics
Type I errors & Type II errors
hypothesis
39. To prove the guiding theory further, these predictions are tested as well, as part of the scientific method. If the inference holds true, then the descriptive statistics of the new data increase the soundness of that
That is the median value
Step 2 of a statistical experiment
Correlation
hypothesis
40. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written 'Y | X') is the probability distribution of Y when X is known to be a particular value.
Independence or Statistical independence
Conditional distribution
Inferential
The standard deviation
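A small sketch of the conditional distribution in item 40, built from a made-up joint distribution of two discrete random variables X and Y:

# Hypothetical joint distribution P(X, Y) over small discrete supports.
joint = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}

def conditional_y_given_x(x):
    # P(Y | X = x): joint probabilities renormalized by the marginal P(X = x).
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return {y: p / p_x for (xi, y), p in joint.items() if xi == x}

print(conditional_y_given_x(1))    # {0: 0.4285..., 1: 0.5714...}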
41. A pairwise independent collection of random variables is a set of random variables any two of which are independent.
the population mean
Pairwise independence
the population variance
Probability density
42. Is a measure of the 'peakedness' of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations.
Simulation
Marginal probability
φ(z), and its cdf by Φ(z).
Kurtosis
43. Is a sample and the associated data points.
The standard deviation
Block
the population variance
A data set
44. The errors, or differences between the estimated response ŷi and the actual measured response yi, collectively
Residuals
Step 1 of a statistical experiment
the population correlation
A sample
45. Is a measure of its statistical dispersion, indicating how far from the expected value its values typically are. The variance of random variable X is typically designated as Var(X), σ²_X, or simply σ².
The variance of a random variable
Reliable measure
Independent Selection
covariance of X and Y
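Item 45 as a worked calculation, using a made-up discrete random variable X: the variance is the expected squared deviation from the expected value.

# Hypothetical distribution of X: value -> probability.
dist = {1: 0.2, 2: 0.5, 3: 0.3}

e_x = sum(x * p for x, p in dist.items())                  # E(X), about 2.1
var_x = sum((x - e_x) ** 2 * p for x, p in dist.items())   # E[(X - E(X))^2], about 0.49
print(round(e_x, 4), round(var_x, 4))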
46. Is data that can take only two values - usually represented by 0 and 1.
hypotheses
Binary data
applied statistics
nominal, ordinal, interval, and ratio
47. Is a sample space over which a probability measure has been defined.
A probability space
hypotheses
Pairwise independence
Law of Large Numbers
48. Also called correlation coefficient, is a numeric measure of the strength of linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson product-moment correlation coefficient.
Type II errors
The Covariance between two random variables X and Y, with expected values E(X) =
Confounded variables
Correlation
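A short sketch of item 48: one common choice, the Pearson correlation coefficient, scales the covariance of two variables by the product of their standard deviations (the paired data below are invented):

import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0]    # e.g. shoe sizes (invented)
ys = [2.0, 4.0, 5.0, 4.0, 5.0]    # e.g. heights (invented)

mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))

# r = Cov(X, Y) / (sd_X * sd_Y); the sample size cancels, and r lies between -1 and 1.
r = cov / (sd_x * sd_y)
print(round(r, 4))                 # 0.7746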
49. The objects described by a set of data: person (animal), place, and thing. (SUBJECTS)
Individual
A Probability measure
Step 1 of a statistical experiment
Bias
50. Gives the probability of events in a probability space.
The variance of a random variable
A Statistical parameter
A Probability measure
An Elementary event