Test your basic knowledge
CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions:
Answer 50 questions in 15 minutes.
If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The three wrong answers for each question are randomly chosen from answers to other questions, so you may sometimes find the answers obvious, but retaking the test reinforces your understanding each time.
1. Is the exact middle value of a set of numbers. Arrange the numbers in numerical order, then find the value in the middle of the list.
Experimental and observational studies
Law of Large Numbers
Standard error
The median value
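Worked example for question 1: a minimal sketch using Python's standard library; the sample values are made up for illustration.

    from statistics import median

    values = [7, 3, 9, 1, 5]
    print(sorted(values))   # [1, 3, 5, 7, 9] - arrange the numbers in numerical order
    print(median(values))   # 5 - the value in the middle of the sorted list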
2. A scale that represents an ordinal scale, such as rating looks on a scale from 1 to 10.
The average, or arithmetic mean
The Range
Greek letters
Likert scale
3. Cov[X, Y]:
Bias
A probability density function
covariance of X and Y
Random variables
4. Is the probability of two events occurring together. The joint probability of A and B is written P(A and B) or P(A, B).
Joint probability
methods of least squares
Statistical adjustment
Quantitative variable
5. A measurement such that the random error is small
Reliable measure
Type 2 Error
Type I errors & Type II errors
the population mean
6. Is a set of entities about which statistical inferences are to be drawn, often based on random sampling. One can also talk about a population of measurements or values.
A population or statistical population
Trend
Block
A likelihood function
7. Is a measure of the asymmetry of the probability distribution of a real-valued random variable. Roughly speaking, a distribution has positive skew (right-skewed) if the higher tail is longer and negative skew (left-skewed) if the lower tail is longer.
Type I errors
Probability and statistics
Skewness
Correlation
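Worked example for question 7: a minimal sketch of the moment coefficient of skewness, computed with Python's standard library; the data set is made up for illustration.

    from statistics import mean, pstdev

    data = [1, 2, 2, 3, 3, 3, 4, 10]   # one large value makes the higher tail longer
    m, s = mean(data), pstdev(data)
    skew = sum((x - m) ** 3 for x in data) / (len(data) * s ** 3)
    print(skew)   # positive, so this sample is right-skewed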
8. Many statistical methods seek to minimize the mean-squared error, and these are called
methods of least squares
Kurtosis
Joint distribution
Correlation
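Worked example for question 8: a minimal least-squares sketch that fits a line y = a + b*x using the closed-form formulas; the data points are made up for illustration.

    xs = [1, 2, 3, 4]
    ys = [2.1, 3.9, 6.2, 7.8]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    print(a, b)   # the intercept and slope that minimize the mean-squared error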
9. (or multivariate random variable) is a vector whose components are random variables on the same probability space.
Prior probability
Valid measure
Simple random sample
A Random vector
10. Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability.
Correlation coefficient
methods of least squares
Step 1 of a statistical experiment
Prior probability
11. σ²
Correlation
The average, or arithmetic mean
the population variance
Average and arithmetic mean
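Worked example for question 11: a minimal sketch contrasting the population variance (divide by N) with the sample variance s² (divide by N - 1), using Python's statistics module; the data are made up for illustration.

    from statistics import pvariance, variance

    data = [2, 4, 4, 4, 5, 5, 7, 9]
    print(pvariance(data))   # population variance: 4.0
    print(variance(data))    # sample variance: about 4.57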
12. Is the probability of some event A, assuming event B. Conditional probability is written P(A|B), and is read 'the probability of A, given B'.
Conditional probability
Power of a test
The Range
Atomic event
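Worked example for question 12: a quick check of P(A|B) = P(A and B) / P(B) on a single fair die roll; the events are chosen for illustration.

    # A = the roll is even, B = the roll is greater than 3, for one fair six-sided die
    p_b = 3 / 6              # outcomes {4, 5, 6}
    p_a_and_b = 2 / 6        # outcomes {4, 6}
    print(p_a_and_b / p_b)   # P(A|B) = 2/3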
13. Error also refers to the extent to which individual observations in a sample differ from a central value, such as
That value is the median value
An estimate of a parameter
the sample or population mean
The arithmetic mean of a set of numbers x1, x2, ..., xn
14. Two variables such that their effects on the response variable cannot be distinguished from each other.
Estimator
That value is the median value
Confounded variables
Joint probability
15. Some commonly used symbols for sample statistics
the sample mean x̄, the sample variance s², the sample correlation coefficient r, and the sample cumulants k_r.
The Expected value
descriptive statistics
The Range
16. (or atomic event) is an event with only one element. For example, when pulling a card out of a deck, 'getting the jack of spades' is an elementary event, while 'getting a king or an ace' is not.
Descriptive
An Elementary event
Estimator
P-value
17. A pairwise independent collection of random variables is a set of random variables any two of which are independent.
The Expected value
Lurking variable
Pairwise independence
Binomial experiment
18. When you have two or more competing models, choose the simpler of the two models.
inferential statistics
Marginal distribution
A sampling distribution
Law of Parsimony
19. Have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit).
Count data
Descriptive
Interval measurements
Greek letters
20. Is denoted by x̄, pronounced 'x bar'.
The arithmetic mean of a set of numbers x1, x2, ..., xn
A data set
Coefficient of determination
P-value
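Worked example for question 20: a minimal sketch of the arithmetic mean, x̄ = (x1 + x2 + ... + xn) / n; the numbers are made up for illustration.

    values = [4, 8, 15, 16, 23]
    x_bar = sum(values) / len(values)   # add the given values, divide the sum by the number of values
    print(x_bar)                        # 13.2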
21. ?
Divide the sum by the number of values.
the population correlation
Coefficient of determination
Ordinal measurements
22. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, when we assert that two random variables are independent, we mean that knowing the value of one tells us nothing about the distribution of the other.
σ-algebras
Independence or Statistical independence
Prior probability
Experimental and observational studies
23. Is data arising from counting that can take only non-negative integer values.
Count data
An Elementary event
variance of X
nominal, ordinal, interval, and ratio
24. Is a subset of the sample space, to which a probability can be assigned. For example, on rolling a die, 'getting a five or a six' is an event (with a probability of one third if the die is fair).
P-value
Sample space
A Distribution function
An event
25. Is a measure of the 'peakedness' of the probability distribution of a real-valued random variable. Higher kurtosis means more of the variance is due to infrequent extreme deviations, as opposed to frequent modestly sized deviations.
Parameter
That value is the median value
Kurtosis
Prior probability
26. Is the most commonly used measure of statistical dispersion. It is the square root of the variance, and is generally written σ (sigma).
Count data
P-value
Qualitative variable
The standard deviation
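Worked example for question 26: a minimal sketch confirming that the standard deviation is the square root of the variance, using Python's statistics module; the data are made up for illustration.

    from math import sqrt
    from statistics import pstdev, pvariance

    data = [2, 4, 4, 4, 5, 5, 7, 9]
    print(pstdev(data))            # 2.0
    print(sqrt(pvariance(data)))   # 2.0, the square root of the variance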
27. A sample selected in such a way that each individual is equally likely to be selected, and any group of size n is equally likely to be selected.
applied statistics
A sampling distribution
Simple random sample
Bias
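Worked example for question 27: a minimal sketch of drawing a simple random sample with Python's random module; the population and the sample size are made up for illustration.

    import random

    population = list(range(1, 101))         # individuals numbered 1 to 100
    sample = random.sample(population, 10)   # each group of size 10 is equally likely to be chosen
    print(sample)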
28. Are usually written in upper case roman letters: X, Y, etc.
Random variables
A Probability measure
Cumulative distribution functions
f(z), and its cdf by F(z).
29. Some commonly used symbols for population parameters
The Range
the population mean
Simpson's Paradox
Divide the sum by the number of values.
30. Samples are drawn from two different populations such that the sample data drawn from one population is completely unrelated to the selection of sample data from the other population.
Binary data
Independent Selection
Valid measure
A sampling distribution
31. Is a process of selecting observations to obtain knowledge about a population. There are many methods for choosing which observations to include in the sample.
Sampling
Greek letters
P-value
Descriptive
32. (or expectation) of a random variable is the sum of the probability of each possible outcome of the experiment multiplied by its payoff ('value'). Thus, it represents the average amount one 'expects' to win per bet if bets with identical odds are repeated many times.
hypothesis
A data point
categorical variables
The Expected value
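Worked example for question 32: a minimal sketch of the expected value of one fair die roll, summing each outcome times its probability; the game is chosen for illustration.

    outcomes = [1, 2, 3, 4, 5, 6]
    expected_value = sum(x * (1 / 6) for x in outcomes)
    print(expected_value)   # 3.5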
33. Consists of a number of independent trials repeated under identical conditions. On each trial, there are two possible outcomes.
methods of least squares
quantitative variables
A Distribution function
Binomial experiment
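Worked example for question 33: a minimal sketch of a binomial probability, P(k successes in n trials) = C(n, k) * p^k * (1 - p)^(n - k), using math.comb; the values of n, k, and p are made up for illustration.

    from math import comb

    n, k, p = 10, 3, 0.5   # 10 fair coin flips, exactly 3 heads
    prob = comb(n, k) * p ** k * (1 - p) ** (n - k)
    print(prob)            # 0.1171875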
34. When the information in a contingency table is reorganized into more or fewer categories, the relationships seen can change or reverse.
35. In Bayesian inference, this represents prior beliefs or other information that is available before new data or observations are taken into account.
Statistical adjustment
Prior probability
Type 1 Error
A population or statistical population
36. The probability of the observed value or something more extreme under the assumption that the null hypothesis is true.
Credence
P-value
the population variance
Simulation
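Worked example for question 36: a minimal sketch of a one-sided p-value, the probability of 8 or more heads in 10 flips assuming the null hypothesis of a fair coin; the numbers are made up for illustration.

    from math import comb

    n, observed = 10, 8
    p_value = sum(comb(n, k) * 0.5 ** n for k in range(observed, n + 1))
    print(p_value)   # about 0.055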
37. Are written in corresponding lower case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X.
Simple random sample
An experimental study
Particular realizations of a random variable
Power of a test
38. Is used to describe probability in a continuous probability distribution. For example, you can't say that the probability of a man being exactly six feet tall is 20%, but you can say he has a 20% chance of being between five and six feet tall.
Probability density
Quantitative variable
Null hypothesis
A statistic
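Worked example for question 38: a minimal sketch computing P(a < Z < b) for a standard normal variable from its cumulative distribution function, Phi(z) = (1 + erf(z / sqrt(2))) / 2; the interval is chosen for illustration.

    from math import erf, sqrt

    def std_normal_cdf(z):
        return (1 + erf(z / sqrt(2))) / 2

    # probability of falling in an interval, not of hitting a single exact value
    print(std_normal_cdf(1.0) - std_normal_cdf(-1.0))   # about 0.6827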
39. There are four main levels of measurement used in statistics; each of these has a different degree of usefulness in statistical research.
Binary data
A likelihood function
nominal, ordinal, interval, and ratio
Type 1 Error
40. A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion about the effect of changes in the values of predictors or independent variables on dependent variables or responses.
the sample or population mean
Experimental and observational studies
Dependent Selection
Sampling
41. A numerical measure that describes an aspect of a sample.
Cumulative distribution functions
Likert scale
Statistic
Binary data
42. Is a function that gives the probability of all elements in a given space; see the list of probability distributions.
A probability distribution
An event
Mutual independence
Ratio measurements
43. Long-term upward or downward movement over time.
Trend
Observational study
Posterior probability
Inferential statistics
44. Are simply two different terms for the same thing. To compute it, add the given values, then divide the sum by the number of values.
Law of Large Numbers
Residuals
Average and arithmetic mean
Parameter, or 'statistical parameter'
45. Also called the correlation coefficient, is a numeric measure of the strength of the linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson product-moment correlation coefficient.
Correlation
Interval measurements
Sample space
Variable
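Worked example for question 45: a minimal sketch of the Pearson correlation coefficient r, computed as the covariance divided by the product of the standard deviations; the paired data are made up for illustration.

    from math import sqrt

    xs = [1, 2, 3, 4, 5]
    ys = [2, 4, 5, 4, 6]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    print(cov / (sx * sy))   # about 0.85, a fairly strong positive linear relationship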
46. A subjective estimate of probability.
Credence
Sampling frame
Valid measure
Independent Selection
47. A collection of events is mutually independent if, for any subset of the collection, the joint probability of all events occurring is equal to the product of the probabilities of the individual events. Think of the results of a series of coin flips.
methods of least squares
Mutual independence
Residuals
σ-algebras
48. Given two random variables X and Y, the joint distribution of X and Y is the probability distribution of X and Y together.
The Mean of a random variable
Reliable measure
Joint distribution
The Range
49. Failing to reject a false null hypothesis.
A probability space
Trend
Independence or Statistical independence
Type 2 Error
50. Have no meaningful rank order among values.
Nominal measurements
A probability space
Descriptive statistics
The Expected value