Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you may at times find the answers obvious, but you will see that it reinforces your understanding as you take the test each time.
1. A numerical measure that describes an aspect of a sample.
Statistic
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Law of Large Numbers
Coefficient of determination
2. Is denoted by x̄, pronounced 'x bar'.
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
Greek letters
The arithmetic mean of a set of numbers x1, x2, ..., xn
A probability distribution
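For study reference (not part of the original quiz text), the arithmetic mean in question 2 is conventionally written as:
\[ \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i \]
For example, the mean of 2, 4, and 9 is (2 + 4 + 9)/3 = 5.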
3. When there is an even number of values...
Statistical adjustment
Residuals
That is the median value
Type I errors
4. Is data that can take only two values, usually represented by 0 and 1.
Binary data
variance of X
Average and arithmetic mean
Correlation
5. A group of individuals sharing some common features that might affect the treatment.
hypothesis
Residuals
Trend
Block
6. There are four main levels of measurement used in statistics: each of these has a different degree of usefulness in statistical research.
Valid measure
nominal, ordinal, interval, and ratio
Statistical adjustment
An Elementary event
7. Is the length of the smallest interval which contains all the data.
inferential statistics
Variable
A Distribution function
The Range
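As a study note, the range in question 7 is simply the largest observation minus the smallest:
\[ \text{Range} = x_{\max} - x_{\min} \]
For example, the data set {3, 7, 12} has range 12 - 3 = 9.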
8. A scale that represents an ordinal scale, such as rating looks on a scale from 1 to 10.
Cumulative distribution functions
Likert scale
variance of X
A Random vector
9. Is that part of a population which is actually observed.
f(z), and its cdf by F(z).
A population or statistical population
A sample
Reliable measure
10. The probability of the observed value or something more extreme under the assumption that the null hypothesis is true.
P-value
Skewness
the population cumulants
Treatment
11. Used to reduce bias, this measure weights more relevant information higher than less relevant information.
Type 1 Error
Statistical adjustment
A probability distribution
Quantitative variable
12. Any specific experimental condition applied to the subjects
the population mean
covariance of X and Y
Treatment
Alpha value (Level of Significance)
13. Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together.
Joint distribution
Correlation coefficient
Simple random sample
Step 1 of a statistical experiment
14. Can be a population parameter, a distribution parameter, or an unobserved parameter (with different shades of meaning). In statistics, this is often a quantity to be estimated.
15. The result of a Bayesian analysis that encapsulates the combination of prior beliefs or information with observed data
Quantitative variable
Experimental and observational studies
Conditional probability
Posterior probability
16. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). The same idea applies when we assert that two random variables are independent.
Posterior probability
Qualitative variable
Independence or Statistical independence
Block
17. Also called the correlation coefficient, this is a numeric measure of the strength of the linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson product-moment correlation coefficient.
Beta value
Binomial experiment
Correlation
Simpson's Paradox
18. Is the set of possible outcomes of an experiment. For example, the sample space for rolling a six-sided die will be {1, 2, 3, 4, 5, 6}.
Greek letters
hypothesis
The sample space
Valid measure
19. Is a function that gives the probability of all elements in a given space (see List of probability distributions).
f(z), and its cdf by F(z).
Type 1 Error
A probability distribution
Cumulative distribution functions
20. Some commonly used symbols for sample statistics
Law of Parsimony
the sample mean x̄, the sample variance s², the sample correlation coefficient r, and the sample cumulants kr.
Type 2 Error
Individual
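For reference, the sample statistics in question 20 are conventionally denoted:
\[ \bar{x} \ (\text{sample mean}), \quad s^2 \ (\text{sample variance}), \quad r \ (\text{sample correlation coefficient}), \quad k_r \ (\text{sample cumulants}) \]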
21. Describes the spread in the values of the sample statistic when many samples are taken.
Block
Ordinal measurements
Variability
Step 1 of a statistical experiment
22. Is a subset of the sample space, to which a probability can be assigned. For example, on rolling a die, 'getting a five or a six' is an event (with a probability of one third if the die is fair).
Observational study
An event
Inferential statistics
An estimate of a parameter
23. Are two related but separate academic disciplines. Statistical analysis often uses probability distributions, and the two topics are often studied together. However, probability theory contains much that is mostly of mathematical interest and not of direct relevance to statistics.
Probability and statistics
Sampling frame
inferential statistics
Type 2 Error
24. Are usually written with upper case calligraphic letters (e.g. F for the set of sets on which we define the probability P).
A data point
The median value
σ-algebras
Sampling
25. Is a sample and the associated data points.
expected value of X
A data set
Simpson's Paradox
A probability distribution
26. Is a sample space over which a probability measure has been defined.
Inferential statistics
A probability space
Step 1 of a statistical experiment
Simple random sample
27. Var[X] :
Probability
variance of X
Probability density
An Elementary event
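As a study note, the variance asked about in question 27 is usually defined by:
\[ \operatorname{Var}[X] = E\big[(X - E[X])^2\big] = E[X^2] - (E[X])^2 \]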
28. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Particular realizations of a random variable
An Elementary event
Statistic
Joint probability
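For reference, the joint probability in question 28 relates to conditional probability by the multiplication rule:
\[ P(A, B) = P(A)\,P(B \mid A), \qquad \text{and } P(A, B) = P(A)\,P(B) \text{ when } A \text{ and } B \text{ are independent} \]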
29. Is the result of applying a statistical algorithm to a data set. It can also be described as an observable random variable.
Skewness
Placebo effect
A statistic
the sample or population mean
30. A variable that has a value or numerical measurement for which operations such as addition or averaging make sense.
Observational study
Standard error
Simulation
Quantitative variable
31. Given two jointly distributed random variables X and Y, the conditional probability distribution of Y given X (written 'Y | X') is the probability distribution of Y when X is known to be a particular value.
Likert scale
Conditional distribution
Probability and statistics
A Distribution function
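As a study note, for discrete X and Y the conditional distribution in question 31 can be written:
\[ P(Y = y \mid X = x) = \frac{P(X = x,\ Y = y)}{P(X = x)}, \qquad P(X = x) > 0 \]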
32. (pdfs) and probability mass functions are denoted by lower case letters, e.g. f(x).
Ratio measurements
Power of a test
Simpson's Paradox
Probability density functions
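For reference, the lower-case pdf f and the upper-case cdf F in question 32 are related (for a continuous random variable) by:
\[ F(x) = P(X \le x) = \int_{-\infty}^{x} f(t)\,dt \]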
33. (or atomic event) is an event with only one element. For example, when pulling a card out of a deck, 'getting the jack of spades' is an elementary event, while 'getting a king or an ace' is not.
Inferential
Credence
Random variables
An Elementary event
34. Is defined as the expected value of the random variable (X - μ)(Y - ν).
A probability space
The Covariance between two random variables X and Y, with expected values E(X) = μ and E(Y) = ν
categorical variables
Simulation
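As a study note, with E(X) = μ and E(Y) = ν as in the matching answer, the covariance in question 34 works out to:
\[ \operatorname{Cov}[X, Y] = E\big[(X - \mu)(Y - \nu)\big] = E[XY] - \mu\nu \]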
35. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, they are sometimes grouped together as
Independent Selection
Count data
Probability density functions
categorical variables
36. Occurs when a subject receives no treatment, but (incorrectly) believes he or she is in fact receiving treatment and responds favorably.
Placebo effect
Descriptive
Cumulative distribution functions
Quantitative variable
37. Working from a null hypothesis, two basic forms of error are recognized:
methods of least squares
categorical variables
Type I errors & Type II errors
Valid measure
38. Some commonly used symbols for population parameters
Ratio measurements
the population mean
Type 2 Error
A Random vector
39. Is a measure of its statistical dispersion, indicating how far from the expected value its values typically are. The variance of random variable X is typically designated as Var(X), σX², or simply σ².
Trend
Qualitative variable
The variance of a random variable
Lurking variable
40. Cov[X, Y] :
An estimate of a parameter
Skewness
Statistical adjustment
covariance of X and Y
41. Can refer either to a sample not being representative of the population, or to the difference between the expected value of an estimator and the true value.
Inferential statistics
Prior probability
Bias
hypothesis
42. Gives the probability of events in a probability space.
Posterior probability
A Probability measure
Simulation
A random variable
43. To find the median value of a set of numbers: Arrange the numbers in numerical order. Locate the two middle numbers in the list. Find the average of those two middle values.
Probability and statistics
That value is the median value
Prior probability
Probability density
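To make the procedure in question 43 concrete, here is a small worked example (the numbers are chosen purely for illustration):
\[ \{3, 5, 8, 10\} \;\Rightarrow\; \text{middle values } 5 \text{ and } 8 \;\Rightarrow\; \text{median} = \frac{5 + 8}{2} = 6.5 \]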
44. (or just likelihood) is a conditional probability function considered as a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with the number k from a bag of n balls, numbered 1 to n. Then
The median value
Residuals
covariance of X and Y
A likelihood function
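The ball example in question 44 is cut off above. Assuming it continues in the usual way (each of the n balls is equally likely to be drawn), the likelihood of n after observing ball k is:
\[ L(n \mid k) = P(k \mid n) = \begin{cases} 1/n & \text{if } n \ge k \\ 0 & \text{if } n < k \end{cases} \]
Viewed as a function of n with k held fixed, this is exactly the "second argument held fixed" idea in the definition.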
45. When you have two or more competing models, choose the simpler model.
Binomial experiment
A probability space
Law of Parsimony
nominal, ordinal, interval, and ratio
46. A measurement such that the random error is small
the population mean
Cumulative distribution functions
Lurking variable
Reliable measure
47. A consistent, repeated deviation of the sample statistic from the population parameter in the same direction when many samples are taken.
σ-algebras
Valid measure
Bias
P-value
48. Are usually written in upper case roman letters: X, Y, etc.
Credence
Law of Parsimony
Random variables
A data point
49. Samples are drawn from two different populations such that the sample data drawn from one population is completely unrelated to the selection of sample data from the other population.
Residuals
Independent Selection
That is the median value
the population correlation
50. Where the null hypothesis is falsely rejected, giving a 'false positive'.
Count data
quantitative variables
Variability
Type I errors