Test your basic knowledge | CLEP General Mathematics: Probability And Statistics
Subjects: clep, math
Instructions: Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you may sometimes find the answers obvious, but taking the test repeatedly reinforces your understanding.
1. A variable describes an individual by placing the individual into a category or a group.
Skewness
Law of Parsimony
Qualitative variable
Statistical adjustment
2. When information in a contingency table is re-organized into more or fewer categories, the relationships seen can change or even reverse.
Simpson's Paradox
3. Is a function that gives the probability of all elements in a given space (see: list of probability distributions)
Statistical adjustment
Particular realizations of a random variable
A probability distribution
Outlier
4. Is the exact middle value of a set of numbers. Arrange the numbers in numerical order, then find the value in the middle of the list.
Simpson's Paradox
A population or statistical population
hypothesis
The median value
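A worked sketch of the median described in item 4, using Python (the data values are invented for illustration):

    import statistics

    data = [7, 1, 4, 9, 3]            # made-up values
    ordered = sorted(data)            # arrange the numbers in numerical order
    print(ordered)                    # [1, 3, 4, 7, 9]
    print(ordered[len(ordered) // 2]) # middle value of the ordered list -> 4
    print(statistics.median(data))    # same result from the standard library -> 4

With an even number of values, statistics.median averages the two middle values.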
5. Some commonly used symbols for sample statistics
Bias
Sampling
the sample mean x̄, the sample variance s², the sample correlation coefficient r, and the sample cumulants k_r.
Outlier
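A minimal sketch of the sample statistics named in item 5 (the sample mean, variance and correlation), with made-up data; statistics.correlation requires Python 3.10 or newer:

    import statistics

    x = [2.0, 4.0, 6.0, 8.0]              # hypothetical sample
    y = [1.0, 3.0, 5.0, 9.0]              # second hypothetical sample

    x_bar = statistics.mean(x)            # sample mean
    s2 = statistics.variance(x)           # sample variance s^2 (divides by n - 1)
    r = statistics.correlation(x, y)      # sample correlation coefficient r
    print(x_bar, s2, r)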
6. Patterns in the data may be modeled in a way that accounts for randomness and uncertainty in the observations, and are then used for drawing inferences about the process or population being studied; this is called
Simpson's Paradox
inferential statistics
The Range
A sample
7. Rejecting a true null hypothesis.
Type 1 Error
Simple random sample
categorical variables
methods of least squares
8. Can be a population parameter, a distribution parameter, or an unobserved parameter (with different shades of meaning). In statistics, this is often a quantity to be estimated.
Parameter
9. Is often denoted by placing a caret over the corresponding symbol, e.g. θ̂, pronounced 'theta hat'.
The Mean of a random variable
An estimate of a parameter
Conditional distribution
Quantitative variable
10. Are usually written in upper case roman letters: X, Y, etc.
Probability density
Individual
applied statistics
Random variables
11. In particular, the pdf of the standard normal distribution is denoted by
Beta value
φ(z), and its cdf by Φ(z).
A random variable
Probability and statistics
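For reference, the standard normal pdf and cdf from item 11 can be evaluated directly; a small sketch using math.erf for the cdf:

    import math

    def phi(z):   # pdf of the standard normal distribution
        return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

    def Phi(z):   # cdf of the standard normal distribution
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))

    print(phi(0))     # about 0.3989
    print(Phi(1.96))  # about 0.975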
12. (or just likelihood) is a conditional probability function considered a function of its second argument with its first argument held fixed. For example, imagine pulling a numbered ball with the number k from a bag of n balls, numbered 1 to n. Then the probability of drawing ball k, viewed as a function of the unknown n, is such a function: it equals 1/n for n ≥ k and 0 otherwise.
Block
That value is the median value
A likelihood function
Outlier
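A sketch of the ball example from item 12: once ball number k has been drawn, the probability 1/n of that draw, viewed as a function of the unknown bag size n, is the likelihood function (k = 4 is an assumed observation):

    def likelihood(n, k=4):
        # Likelihood of bag size n, given that ball number k was drawn.
        return 1.0 / n if n >= k else 0.0

    # The likelihood is largest at the smallest n consistent with the observation.
    for n in range(1, 9):
        print(n, likelihood(n))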
13. Statistical methods can be used for summarizing or describing a collection of data; this is called
descriptive statistics
The sample space
Parameter
That value is the median value
14. Are usually written with upper case calligraphic letters (e.g. F for the set of sets on which we define the probability P)
The Expected value
An experimental study
σ-algebras
Sample space
15. Is that part of a population which is actually observed.
A sample
Law of Large Numbers
Nominal measurements
A sampling distribution
16. Data are gathered and correlations between predictors and response are investigated.
Kurtosis
Count data
observational study
Qualitative variable
17. Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as
Mutual independence
Statistic
categorical variables
the sample mean x̄, the sample variance s², the sample correlation coefficient r, and the sample cumulants k_r.
18. Probability of rejecting a true null hypothesis.
Interval measurements
That value is the median value
Alpha value (Level of Significance)
Joint probability
19. Where the null hypothesis is falsely rejected, giving a 'false positive'.
methods of least squares
Type I errors
A sampling distribution
Experimental and observational studies
20. A measure that is relevant or appropriate as a representation of the property it is intended to measure.
Prior probability
Variable
Valid measure
Descriptive statistics
21. Can be, for example, the possible outcomes of a dice roll (but it is not assigned a value). The distribution function of a random variable gives the probability of different results. We can also derive the mean and variance of a random variable.
Simulation
A Distribution function
A random variable
Inferential statistics
22. Is data that can take only two values, usually represented by 0 and 1.
An Elementary event
variance of X
Binary data
An event
23. The errors, or differences between the estimated response ŷi and the actual measured response yi, collectively
Residuals
Step 2 of a statistical experiment
Greek letters
the sample or population mean
24. Is used to describe probability in a continuous probability distribution. For example, you can't say that the probability of a man being exactly six feet tall is 20%, but you can say he has a 20% chance of being between five and six feet tall. Probability over an interval is obtained by integrating the density over that interval.
Probability density
The arithmetic mean of a set of numbers x1, x2, ..., xn
Statistical dispersion
Type II errors
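A sketch of the height example in item 24, assuming (purely for illustration) that height is normally distributed with mean 5.8 ft and standard deviation 0.3 ft; the probability of falling between five and six feet is the integral of the density over that interval, i.e. a difference of cdf values:

    import math

    mu, sigma = 5.8, 0.3   # assumed, illustrative parameters

    def normal_cdf(x, mu, sigma):
        return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

    # P(5 ft < height < 6 ft) = F(6) - F(5)
    p = normal_cdf(6, mu, sigma) - normal_cdf(5, mu, sigma)
    print(round(p, 3))     # about 0.74 under these assumed parameters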
25. σ²
experimental studies and observational studies.
the population variance
Individual
Trend
26. In number theory, scatter plots of data generated by a distribution function may be transformed with familiar tools used in statistics to reveal underlying patterns, which may then lead to
hypotheses
Quantitative variable
Average and arithmetic mean
An Elementary event
27. (or atomic event) is an event with only one element. For example, when pulling a card out of a deck, 'getting the jack of spades' is an elementary event, while 'getting a king or an ace' is not.
Pairwise independence
Marginal distribution
Placebo effect
An Elementary event
28. The probability of correctly detecting a false null hypothesis.
Qualitative variable
Residuals
Power of a test
The Expected value
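The alpha/beta/power vocabulary of items 18, 19, 28, 40 and 46 can be made concrete with a small simulation; this is only a sketch of a one-sided z-test with made-up parameters:

    import math
    import random

    alpha, n = 0.05, 25
    z_crit = 1.645                 # one-sided critical value at alpha = 0.05
    true_mu, sigma = 0.5, 1.0      # the null hypothesis mu = 0 is actually false here

    trials, rejections = 10000, 0
    for _ in range(trials):
        sample = [random.gauss(true_mu, sigma) for _ in range(n)]
        z = (sum(sample) / n) / (sigma / math.sqrt(n))
        if z > z_crit:
            rejections += 1

    power = rejections / trials    # probability of detecting the false null hypothesis
    print(power, 1 - power)        # power and beta (the Type II error rate)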
29. Is a function of the known data that is used to estimate an unknown parameter; an estimate is the result from the actual application of the function to a particular set of data. The mean can be used as an estimator.
A population or statistical population
Variable
Estimator
A sample
30. (also called statistical variability) is a measure of how diverse some data is. It can be expressed by the variance or the standard deviation.
Observational study
Cumulative distribution functions
Statistical dispersion
Valid measure
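For example, variance and standard deviation as measures of the dispersion described in item 30 (values invented for illustration):

    import statistics

    data = [3, 7, 7, 19]               # hypothetical values
    print(statistics.pvariance(data))  # population variance -> 36
    print(statistics.pstdev(data))     # population standard deviation -> 6.0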
31. A common goal for a statistical research project is to investigate causality - and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables or response.
Experimental and observational studies
Prior probability
Sample space
Qualitative variable
32. κr
Power of a test
An experimental study
the population cumulants
observational study
33. Summarize the population data by describing what was observed in the sample numerically or graphically. Numerical descriptors include mean and standard deviation for continuous data types (like heights or weights), while frequency and percentage are more useful for describing categorical data.
Block
Descriptive statistics
expected value of X
Probability density functions
34. Describes the spread in the values of the sample statistic when many samples are taken.
An estimate of a parameter
Dependent Selection
Variability
inferential statistics
35. A variable has a value or numerical measurement for which operations such as addition or averaging make sense.
Conditional distribution
The Range
σ-algebras
Quantitative variable
36. There are two major types of causal statistical studies. In both types, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable is observed. The difference between the two types lies in how the study is actually conducted.
experimental studies and observational studies.
A likelihood function
Mutual independence
the population correlation
37. Is the function that gives the probability distribution of a random variable. It cannot be negative, and its integral on the probability space is equal to 1.
A sample
Type II errors
Probability density functions
A Distribution function
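As a sanity check of the 'non-negative, integrates to 1' property in item 37, a sketch that numerically integrates the standard normal density:

    import math

    def pdf(z):   # standard normal density, non-negative everywhere
        return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

    # crude Riemann sum over [-10, 10]; the result should be very close to 1
    step = 0.001
    total = sum(pdf(-10 + i * step) for i in range(int(20 / step))) * step
    print(round(total, 6))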
38. Uses patterns in the sample data to draw inferences about the population represented, accounting for randomness. These inferences may take the form of answering yes/no questions about the data (hypothesis testing), estimating numerical characteristics of the data (estimation), and so on.
Joint distribution
Inferential statistics
quantitative variables
Type II errors
39. (cdfs) are denoted by upper case letters, e.g. F(x).
Prior probability
Independent Selection
Cumulative distribution functions
A data set
40. Probability of accepting a false null hypothesis.
Residuals
Beta value
A random variable
An experimental study
41. Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the experiment.
The average - or arithmetic mean
Power of a test
Step 2 of a statistical experiment
Count data
42. Is the length of the smallest interval which contains all the data.
The Range
Beta value
applied statistics
Estimator
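A one-line sketch of the range defined in item 42 (the length of the smallest interval containing all the data), with invented values:

    data = [12, 3, 8, 20, 5]        # hypothetical data
    print(max(data) - min(data))    # range = 20 - 3 = 17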
43. Is a subset of the sample space, to which a probability can be assigned. For example, on rolling a die, 'getting a five or a six' is an event (with a probability of one third if the die is fair).
An event
Parameter
Random variables
A Random vector
44. Are written in corresponding lower case letters. For example, x1, x2, ..., xn could be a sample corresponding to the random variable X.
Joint distribution
Marginal probability
Simpson's Paradox
Particular realizations of a random variable
45. Describes a characteristic of an individual to be measured or observed.
Simple random sample
Estimator
Statistical dispersion
Variable
46. Failing to reject a false null hypothesis.
Lurking variable
Probability density functions
Type 2 Error
An experimental study
47. Many statistical methods seek to minimize the mean-squared error, and these are called
methods of least squares
variance of X
Joint probability
A probability distribution
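A minimal sketch of a least-squares fit as in item 47: the slope and intercept of a straight line y = a + b*x are chosen to minimize the squared errors, using the closed-form formulas (the data are made up):

    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 4.1, 5.9, 8.2]                  # hypothetical responses

    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))  # slope
    a = y_bar - b * x_bar                      # intercept
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    print(a, b, residuals)

The leftover differences printed at the end are exactly the residuals of item 23.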
48. The collection of all possible outcomes in an experiment.
The sample space
Type 2 Error
Sample space
Statistics
49. Also called the correlation coefficient, is a numeric measure of the strength of the linear relationship between two random variables (one can use it to quantify, for example, how shoe size and height are correlated in the population). An example is the Pearson product-moment correlation coefficient.
Correlation
Lurking variable
That value is the median value
the sample mean x̄, the sample variance s², the sample correlation coefficient r, and the sample cumulants k_r.
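A sketch of the correlation coefficient from item 49 for two made-up variables (say, shoe size and height); statistics.correlation computes Pearson's r and requires Python 3.10 or newer:

    import statistics

    shoe = [38, 40, 42, 44, 46]           # hypothetical shoe sizes
    height = [160, 168, 175, 181, 188]    # hypothetical heights in cm

    r = statistics.correlation(shoe, height)
    print(round(r, 3))   # close to 1: a strong positive linear relationship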
50. Two events are independent if the outcome of one does not affect that of the other (for example, getting a 1 on one die roll does not affect the probability of getting a 1 on a second roll). Similarly, when we assert that two random variables are independent, we intuitively mean that knowing the value of one of them gives no information about the value of the other.
Placebo effect
Block
Coefficient of determination
Independence or Statistical independence