Test your basic knowledge: FRM Foundations of Risk Management Quantitative Methods
Subjects: business-skills, certifications, frm
Instructions:
Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study first.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions, so you may sometimes find an answer obvious, but taking the test repeatedly reinforces your understanding.
1. Binomial distribution
Covariance<n> = (lambda)(Covariance<n - 1>) + (1 - lambda)(x<n - 1>)(y<n - 1>)
Sum of n i.i.d. Bernoulli variables - Probability of k successes: (combination n over k)(p^k)(1 - p)^(n - k) - (n over k) = (n!)/((n - k)!k!)
Variance ratio distribution F = (variance(x)/variance(y)) - Greater sample variance is numerator - Nonnegative and skewed right - Approaches normal as df increases - Square of a t-distribution with k df has an F distribution with (1, k) df - m*F(m, n) approaches a Chi-squared(m) distribution as n increases
Can use alpha and beta weights to solve for the long-run average variance - VL = omega/(1 - alpha - beta)
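The binomial formula in item 1 is easy to sanity-check in code. A minimal Python sketch; the function name and the n = 10, p = 0.1 example are illustrative, not from the quiz:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(k successes in n i.i.d. Bernoulli trials with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 3 successes in 10 trials with p = 0.1
print(binomial_pmf(3, 10, 0.1))      # ~0.0574
print(10 * 0.1, 10 * 0.1 * 0.9)      # mean = np, variance = npq
```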
2. Variance of X+Y assuming dependence
Generalized exponential distribution - Exponential is a Weibull distribution with alpha = 1.0 - F(x) = 1 - e^(-(x/beta)^alpha)
Variance(x) + Variance(Y) + 2*covariance(XY)
F(x) = (1/(beta * Gamma(alpha))) * e^(-x/beta) * (x/beta)^(alpha - 1) - Alpha = 1 - becomes exponential - Alpha = k/2, beta = 2 - becomes chi-squared
Special type of pooled data in which the cross sectional unit is surveyed over time
3. Variance of X - Y assuming dependence
Variance(X) + Variance(Y) - 2*covariance(XY)
Simplest approach to extending horizon - J - period VaR = sqrt(J) * 1 - period VaR - Only applies under i.i.d
Non - parametric directly uses a historical dataset - Parametric imposes a specific distribution assumption
Probability that the random variables take on certain values simultaneously
4. Homoskedastic
Variance of conditional distribution of u(i) is constant - T - stat for slope of regression T = (b1 - beta)/SE(b1) - beta is a specified value for hypothesis test
Parameters (mean - volatility - etc) vary over time due to variability in market conditions
Has heavy tails
Generalized exponential distribution - Exponential is a Weibull distribution with alpha = 1.0 - F(x) = 1 - e^(-(x/beta)^alpha)
5. Four sampling distributions
Normal (z) - Student's t - Chi-squared - F
6. Importance sampling technique
[1/(n - 1)]*summation((Xi - X)(Yi - Y))
Attempts to sample along more important paths
E[(Y - meany)^2] = E(Y^2) - [E(Y)]^2
Measures degree of "peakedness" - Value of 3 indicates normal distribution - Kurtosis = E[(X - mean)^4]/sigma^4 - Function of fourth moment
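Item 6's idea of sampling along more important paths can be illustrated with a small simulation: to estimate a rare tail probability for a standard normal, draw from a distribution shifted toward the tail and reweight by the density ratio. The target P(Z > 3), the shift of 3.0, and the sample size are illustrative choices, not from the quiz:

```python
import random, math

def norm_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

random.seed(0)
n, shift = 100_000, 3.0
total = 0.0
for _ in range(n):
    x = random.gauss(shift, 1.0)                  # proposal shifted toward the tail
    weight = norm_pdf(x) / norm_pdf(x, mu=shift)  # likelihood ratio f(x)/g(x)
    total += weight * (x > 3.0)                   # indicator of the rare event
print(total / n)   # close to the true tail probability, about 0.00135
```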
7. Central Limit Theorem
Measures degree of "peakedness" - Value of 3 indicates normal distribution - Kurtosis = E[(X - mean)^4]/sigma^4 - Function of fourth moment
For n>30 - sample mean is approximately normal
Attempts to increase accuracy by reducing sample variance instead of increasing sample size
Depends upon lambda - which indicates the rate of occurrence of the random events (binomial) over a time interval - (lambda^k)/(k!) * e^( - lambda)
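A quick numerical illustration of the Central Limit Theorem in item 7: means of samples of size n = 40 from a skewed exponential distribution cluster approximately normally around the population mean. The choice of distribution, sample size, and replication count is illustrative only:

```python
import random, statistics

random.seed(1)
n, reps = 40, 5_000
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n)) for _ in range(reps)]
print(statistics.fmean(means))   # ~1.0, the population mean
print(statistics.stdev(means))   # ~1/sqrt(40) ~ 0.158, i.e. sigma/sqrt(n)
```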
8. POT
Peaks over threshold - Collects dataset in excess of some threshold
F = (1/2)((t1^2) + (t2^2) - 2*(correlation of t1, t2)*t1*t2)/(1 - (correlation of t1, t2)^2)
Variance of conditional distribution of u(i) is constant - T - stat for slope of regression T = (b1 - beta)/SE(b1) - beta is a specified value for hypothesis test
Summation((xi - mean)^k)/n
9. Test for statistical independence
T = (x - meanx)/(stddev(x)/sqrt(n)) - Symmetrical - mean = 0 - Variance = k/(k - 2) - Slightly heavy tail (kurtosis>3)
Random walk (usually acceptable) - Constant volatility (unlikely)
P(X=x, Y=y) = P(X=x) * P(Y=y)
Z = (Y - meany)/(stddev(y)/sqrt(n))
10. Perfect multicollinearity
We reject a hypothesis that is actually true
Expected value of the sample mean is the population mean
When one regressor is a perfect linear function of the other regressors
Omitted variable is correlated with regressor - Omitted variable is a determinant of the dependent variable
11. Standard normal distribution
Generalized Autoregressive Conditional Heteroskedasticity model - GARCH(1,1) is the weighted sum of a long-term variance (weight=gamma) - the most recent squared return (weight=alpha) and the most recent variance (weight=beta)
Change in S = S<t - 1>(mean * change in time + stddev * epsilon * sqrt(change in time))
95% = 1.65 99% = 2.33 For one - tailed tests
Transformed to a unit variable - Mean = 0 Variance = 1
12. Continuous representation of the GBM
Probability that the random variables take on certain values simultaneously
Regression can be non - linear in variables but must be linear in parameters
dS<t> = (mean)(S<t>)dt + (stddev)(S<t>)dz - GBM - Geometric Brownian Motion - Represented as drift + shock - Drift = mean * change in time - Shock = stddev * epsilon * sqrt(change in time)
Nonlinearity
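The discrete GBM step described under items 11 and 12 can be simulated directly. A minimal sketch, with the drift, volatility, step size, and starting price chosen only for illustration:

```python
import math, random

random.seed(2)
mu, sigma, dt, steps = 0.08, 0.20, 1 / 252, 252   # assumed annual drift/vol, daily steps
s = 100.0                                          # hypothetical starting price
for _ in range(steps):
    eps = random.gauss(0.0, 1.0)                   # standard normal shock
    s += s * (mu * dt + sigma * eps * math.sqrt(dt))   # drift + shock
print(s)   # one simulated year-end price
```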
13. Skewness
Refers to whether distribution is symmetrical - Skewness = E[(x - mean)^3]/sigma^3 - Positive skew = mean>median>mode - Negative skew = mean<median<mode - if zero - all are equal - Function of the third moment
Conditional mean is time - varying - Conditional volatility is time - varying (more likely)
If variance of the conditional distribution of u(i) is not constant
F(x) = (1/(beta * Gamma(alpha))) * e^(-x/beta) * (x/beta)^(alpha - 1) - Alpha = 1 - becomes exponential - Alpha = k/2, beta = 2 - becomes chi-squared
14. Bootstrap method
Regression can be non - linear in variables but must be linear in parameters
Historical simulation with replacement - Vector is chosen at random from historic period for each simulated period
Reverse engineer the implied std dev from the market price - Cmarket = f(implied standard deviation)
Transformed to a unit variable - Mean = 0 Variance = 1
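Item 14's bootstrap (historical simulation with replacement) takes only a few lines of Python; the return series and horizon below are made up for illustration:

```python
import random

random.seed(3)
historical_returns = [0.01, -0.02, 0.004, 0.013, -0.007, 0.002]  # hypothetical sample
horizon = 10
# Each simulated period draws one past return at random, with replacement
path = [random.choice(historical_returns) for _ in range(horizon)]
print(path)

# Bootstrapped distribution of the sample mean return
boot_means = [sum(random.choices(historical_returns, k=len(historical_returns)))
              / len(historical_returns) for _ in range(1_000)]
print(min(boot_means), max(boot_means))
```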
15. Chi - squared distribution
Changes the sign of the random samples - appropriate when distribution is symmetric - creates twice as many replications
Observe sample variance and compare it to hypothetical population variance (sample variance/population variance)(n - 1) = chi-squared - Non-negative and skewed right - approaches zero as n increases - mean = k where k = degrees of freedom - Variance = 2k
Assumes a value among a finite set including x1 - x2 - etc - P(X=xk) = f(xk)
Generalized Extreme Value Distribution - Uses a tail index - smaller index means fatter tails
16. Variance of sample mean
Variance(y)/n = variance of sample Y
Reverse engineer the implied std dev from the market price - Cmarket = f(implied standard deviation)
Make parametric assumptions about covariances of each position and extend them to entire portfolio - Problem: correlations change during stressful market events
Dataset is parsed into blocks with greater length than the periodicity - Observations must be i.i.d.
17. Two assumptions of square root rule
Coefficient of determination - fraction of variance explained by independent variables - R^2 = ESS/TSS = 1 - (SSR/TSS)
Z = (Y - meany)/(stddev(y)/sqrt(n))
Mean = np - Variance = npq - Std dev = sqrt(npq)
Random walk (usually acceptable) - Constant volatility (unlikely)
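The square-root rule from item 17 is a one-line scaling, shown here with a hypothetical 1-day VaR figure:

```python
import math

one_day_var = 1_000_000                  # hypothetical 1-day VaR in dollars
ten_day_var = math.sqrt(10) * one_day_var  # J-period VaR = sqrt(J) * 1-period VaR
print(round(ten_day_var))                # ~3,162,278; valid only under i.i.d. returns
```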
18. Two requirements of OVB
Omitted variable is correlated with regressor - Omitted variable is a determinant of the dependent variable
Only requires two parameters = mean and variance
Infinite number of values within an interval - P(a<x<b) = interval from a to b of f(x)dx
Returns over time for an individual asset
19. Conditional probability functions
Probability of an outcome given another outcome P(Y|X) = P(X, Y)/P(X) - P(B|A) = P(A and B)/P(A)
When a distribution switches from high to low volatility - but never in between - Will exhibit fat tails if unaccounted for
Sample variance = (1/(k - 1))Summation(Yi - mean)^2
Best Linear Unbiased Estimator - Sample mean for samples that are i.i.d.
20. F distribution
Instead of independent samples - systematically fills space left by previous numbers in the series - Std error shrinks at 1/k instead of 1/sqrt(k) but accuracy determination is hard since variables are not independent
Random walk (usually acceptable) - Constant volatility (unlikely)
Variance ratio distribution F = (variance(x)/variance(y)) - Greater sample variance is numerator - Nonnegative and skewed right - Approaches normal as df increases - Square of a t-distribution with k df has an F distribution with (1, k) df - m*F(m, n) approaches a Chi-squared(m) distribution as n increases
Low Frequency - High Severity events
21. Priori (classical) probability
Based on an equation - P(A) = # of A/total outcomes
Variance = summation(alpha weight)(u<n - i>^2) - alpha weights must sum to one
Depends upon lambda - which indicates the rate of occurrence of the random events (binomial) over a time interval - (lambda^k)/(k!) * e^( - lambda)
F(x) = (1/(stddev(x) * sqrt(2pi))) * e^(-(x - mean)^2/(2*variance)) - skew = 0 - Parsimony = only requires mean and variance - Summation stability = combination of two normal distributions is a normal distribution - Kurtosis = 3
22. Panel data (longitudinal or micropanel)
Variance(x)
P(X=x, Y=y) = P(X=x) * P(Y=y)
Transformed to a unit variable - Mean = 0 Variance = 1
Special type of pooled data in which the cross sectional unit is surveyed over time
23. Poisson Distribution
We accept a hypothesis that should have been rejected
Flexible and postulate stochastic process or resample historical data - Full valuation on target date - More prone to model risk - Slow and loses precision due to sampling variation
Sample mean will near the population mean as the sample size increases
Depends upon lambda - which indicates the rate of occurrence of the random events (binomial) over a time interval - (lambda^k)/(k!) * e^( - lambda)
24. Persistence
Variance reverts to a long run level
In EWMA - the lambda parameter - In GARCH(1 -1) - sum of alpha and beta - Higher persistence implies slow decay toward the long - run average variance
(a^2)(variance(x)) + (b^2)(variance(y))
Minimize Summation(Yi - m)^2 - Minimizes the sum of squared gaps
25. Type I error
Based on an equation - P(A) = # of A/total outcomes
E(mean) = mean
We reject a hypothesis that is actually true
P(X=x, Y=y) = P(X=x) * P(Y=y)
26. Beta distribution
Based on an equation - P(A) = # of A/total outcomes
Two parameters: alpha (center) and beta (shape) - Popular for modeling recovery rates
Ignores order of observations (no weight for most recent observations) - Has a ghosting feature where data points are dropped due to length of window
Does not depend on a prior event or information
27. Poisson distribution equations for mean variance and std deviation
Mean = lambda - Variance = lambda - Std dev = sqrt(lambda)
Best Linear Unbiased Estimator - Sample mean for samples that are i.i.d.
Covariance<n> = (lambda)(Covariance<n - 1>) + (1 - lambda)(x<n - 1>)(y<n - 1>)
Exponentially Weighted Moving Average - Weights decline in constant proportion given by lambda
28. Cholesky factorization (decomposition)
Time to wait until an event takes place - F(x) = lambda e^( - lambdax) - Lambda = 1/beta
SE(predicted std dev) = std dev * sqrt(1/(2T)) - Ten times more precision needs 100 times more replications
Population denominator = n - Sample denominator = n - 1
Create covariance matrix - Covariance matrix (R) is decomposed into lower-triangle matrix (L) and upper-triangle matrix (U) - which are mirrors of each other - R = LU - solve for all matrix elements - LU is the result and is used to simulate correlated random variables
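A compact sketch of how the Cholesky factor from item 28 is used to generate correlated draws, assuming NumPy is available; the 2x2 covariance matrix below is made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[0.04, 0.018],
              [0.018, 0.09]])          # hypothetical covariance matrix
L = np.linalg.cholesky(R)              # lower-triangular factor, R = L @ L.T
z = rng.standard_normal((2, 100_000))  # independent standard normals
x = L @ z                              # correlated draws with covariance ~R
print(np.cov(x))                       # close to R
```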
29. Economical(elegant)
Variance(x)
Only requires two parameters = mean and variance
Can use alpha and beta weights to solve for the long-run average variance - VL = omega/(1 - alpha - beta)
P - value
30. SER
Standard error of the regression - SER = sqrt(SSR/(n - 2)) = sqrt((ei^2)/(n - 2)) SSR - Sum of squared residuals - Summation[(Yi - predicted Yi)^2] - Summation of each squared deviation between the actual Y and the predicted Y - Directly related
Variance = summation(alpha weight)(u<n - i>^2) - alpha weights must sum to one
Adjusted R^2 does not necessarily increase from the addition of new independent variables - Adjusted R^2 = 1 - (n - 1)/(n - k - 1) * (SSR/TSS) = 1 - su^2/sy^2
Returns over time for an individual asset
31. Maximum likelihood method
Changes the sign of the random samples - appropriate when distribution is symmetric - creates twice as many replications
Variance(x)
Non - parametric directly uses a historical dataset - Parametric imposes a specific distribution assumption
Choose parameters that maximize the likelihood of the observed data occurring
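For item 31, a tiny worked example of maximum likelihood for a normal sample: the log-likelihood is highest at the sample mean and the n-denominator variance. The data values are made up for illustration:

```python
import math

data = [2.1, 1.9, 2.4, 2.0, 1.6, 2.2]              # hypothetical observations
mu_hat = sum(data) / len(data)                      # MLE of the mean
var_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)  # MLE of the variance

def log_likelihood(mu, var):
    return sum(-0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var) for x in data)

print(mu_hat, var_hat)
# The MLE pair gives a higher log-likelihood than any other candidate, e.g. (2.0, 0.1)
print(log_likelihood(mu_hat, var_hat) >= log_likelihood(2.0, 0.1))  # True
```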
32. K - th moment
Contains variables not explicit in model - Accounts for randomness
Summation((xi - mean)^k)/n
OLS estimators are unbiased - consistent - and normal regardless of homoskedasticity or heteroskedasticity - OLS estimates are efficient - Can use homoskedasticity-only variance formula - OLS is BLUE
Variance(X) + Variance(Y) - 2*covariance(XY)
33. Critical z values
95% = 1.65 99% = 2.33 For one - tailed tests
Random walk (usually acceptable) - Constant volatility (unlikely)
In EWMA - the lambda parameter - In GARCH(1 -1) - sum of alpha and beta - Higher persistence implies slow decay toward the long - run average variance
Coefficient of determination - fraction of variance explained by independent variables - R^2 = ESS/TSS = 1 - (SSR/TSS)
34. LAD
r<i> = ln(S<i>/S<i - 1>)
Apply today's weight for yesterday's returns "what would happen if we held this portfolio in the past"
When one regressor is a perfect linear function of the other regressors
Least absolute deviations estimator - used when extreme outliers are not uncommon
35. Overall F - statistic
(a^2)(variance(x)) + (b^2)(variance(y))
Statement of the error or precision of an estimate
Choose parameters that maximize the likelihood of the observed data occurring
F = (1/2)((t1^2) + (t2^2) - 2*(correlation of t1, t2)*t1*t2)/(1 - (correlation of t1, t2)^2)
36. BLUE
Adjusted R^2 does not necessarily increase from the addition of new independent variables - Adjusted R^2 = 1 - (n - 1)/(n - k - 1) * (SSR/TSS) = 1 - su^2/sy^2
Var(X) + Var(Y)
Best Linear Unbiased Estimator - Sample mean for samples that are i.i.d.
Time to wait until an event takes place - F(x) = lambda e^( - lambdax) - Lambda = 1/beta
37. EWMA
Standard deviation of the sampling distribution SE = std dev(y)/sqrt(n)
r<i> = ln(S<i>/S<i - 1>)
Infinite number of values within an interval - P(a<x<b) = interval from a to b of f(x)dx
Exponentially Weighted Moving Average - Weights decline in constant proportion given by lambda
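The EWMA recursion behind item 37 in a few lines; lambda = 0.94 (the usual RiskMetrics value) and the return series are illustrative choices, not from the quiz:

```python
# variance<n> = lambda * variance<n - 1> + (1 - lambda) * return<n - 1>^2
lam = 0.94                                  # decay factor lambda
variance = 0.0001                           # yesterday's variance estimate (1% daily vol)
returns = [0.012, -0.008, 0.005, -0.020]    # hypothetical daily returns
for r in returns:
    variance = lam * variance + (1 - lam) * r ** 2
print(variance ** 0.5)                      # updated daily volatility estimate
```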
38. Unconditional vs conditional distributions
Unconditional is the same regardless of market or economic conditions (unrealistic) - Conditional depends on the economy - market - or other state
Yi = B0 + B1Xi + ui
Var(X) + Var(Y)
Assumes a value among a finite set including x1 - x2 - etc - P(X=xk) = f(xk)
39. Reliability
Mean = lambda - Variance = lambda - Std dev = sqrt(lambda)
Statement of the error or precision of an estimate
Variance(x)
Can use alpha and beta weights to solve for the long-run average variance - VL = omega/(1 - alpha - beta)
40. Gamma distribution
Transformed to a unit variable - Mean = 0 Variance = 1
T = (x - meanx)/(stddev(x)/sqrt(n)) - Symmetrical - mean = 0 - Variance = k/(k - 2) - Slightly heavy tail (kurtosis>3)
F(x) = (1/(beta * Gamma(alpha))) * e^(-x/beta) * (x/beta)^(alpha - 1) - Alpha = 1 - becomes exponential - Alpha = k/2, beta = 2 - becomes chi-squared
Distribution with only two possible outcomes
41. GARCH
Depends on whether X and the error term u are positively or negatively correlated - Beta1_hat = beta1 + correlation(X, u)*(stddev(u)/stddev(X))
Apply today's weight for yesterday's returns "what would happen if we held this portfolio in the past"
Generalized Autoregressive Conditional Heteroskedasticity model - GARCH(1,1) is the weighted sum of a long-term variance (weight=gamma) - the most recent squared return (weight=alpha) and the most recent variance (weight=beta)
If variance of the conditional distribution of u(i) is not constant
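A numeric sketch of the GARCH(1,1) update from item 41 together with the long-run variance relation VL = omega/(1 - alpha - beta) that appears elsewhere in the quiz; the parameter values are illustrative, not from the source:

```python
# variance<t> = omega + alpha * return<t - 1>^2 + beta * variance<t - 1>
omega, alpha, beta = 0.000002, 0.08, 0.90    # hypothetical GARCH(1,1) parameters
long_run_variance = omega / (1 - alpha - beta)
print(long_run_variance, long_run_variance ** 0.5)   # V_L = 0.0001, i.e. 1% daily vol

variance = 0.00012                           # yesterday's conditional variance
last_return = -0.015                         # yesterday's return
variance = omega + alpha * last_return ** 2 + beta * variance
print(variance)                              # today's conditional variance
```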
42. SER
Standard error of error term - SER = sqrt(SSR/(n - k - 1)) - K is the # of slope coefficients
Sample mean will near the population mean as the sample size increases
Least absolute deviations estimator - used when extreme outliers are not uncommon
Probability of an outcome given another outcome P(Y|X) = P(X, Y)/P(X) - P(B|A) = P(A and B)/P(A)
43. Difference between population and sample variance
Population denominator = n - Sample denominator = n - 1
Coefficient of determination - fraction of variance explained by independent variables - R^2 = ESS/TSS = 1 - (SSR/TSS)
Variance(x) + Variance(Y) + 2*covariance(XY)
Concerned with a single random variable (ex. Roll of a die)
44. Statistical (or empirical) model
Random walk (usually acceptable) - Constant volatility (unlikely)
Returns over time for a combination of assets (combination of time series and cross - sectional data)
Yi = B0 + B1Xi + ui
E(XY) - E(X)E(Y)
45. Law of Large Numbers
Sample mean will near the population mean as the sample size increases
Variance = summation(alpha weight)(u<n - i>^2) - alpha weights must sum to one
Confidence level
Rxy = Sxy/(Sx*Sy)
46. Consistent
When the sample size is large - the uncertainty about the value of the sample mean is very small
P(Z>t)
Adjusted R^2 does not necessarily increase from the addition of new independent variables - Adjusted R^2 = 1 - (n - 1)/(n - k - 1) * (SSR/TSS) = 1 - su^2/sy^2
OLS estimators are unbiased - consistent - and normal regardless of homoskedasticity or heteroskedasticity - OLS estimates are efficient - Can use homoskedasticity-only variance formula - OLS is BLUE
47. Kurtosis
Measures degree of "peakedness" - Value of 3 indicates normal distribution - Kurtosis = E[(X - mean)^4]/sigma^4 - Function of fourth moment
Standard error of error term - SER = sqrt(SSR/(n - k - 1)) - K is the # of slope coefficients
Summation((xi - mean)^k)/n
Variance(x)
48. GEV
Generalized Extreme Value Distribution - Uses a tail index - smaller index means fatter tails
Sum of n i.i.d. Bernoulli variables - Probability of k successes: (combination n over k)(p^k)(1 - p)^(n - k) - (n over k) = (n!)/((n - k)!k!)
Based on an equation - P(A) = # of A/total outcomes
Variance = (1/m) summation(u<n - i>^2)
49. GPD
Generalized Pareto Distribution - Models distribution of POT - Empirical distributions are rarely sufficient for this model
Parameters (mean - volatility - etc) vary over time due to variability in market conditions
Contains variables not explicit in model - Accounts for randomness
F = [(SSR<restricted> - SSR<unrestricted>)/q]/[SSR<unrestricted>/(n - k<unrestricted> - 1)]
50. WLS
Contains variables not explicit in model - Accounts for randomness
We reject a hypothesis that is actually true
Changes the sign of the random samples - appropriate when distribution is symmetric - creates twice as many replications
Weighted least squares estimator - Weights the squares to account for heteroskedasticity and is BLUE
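Item 50's weighted least squares can be sketched by rescaling each observation by 1/sigma_i and running ordinary least squares on the scaled data, assuming NumPy is available; the x, y, and sigma values are made up for illustration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
sigma = np.array([0.1, 0.2, 0.3, 0.4, 0.5])     # known error std devs per observation

X = np.column_stack([np.ones_like(x), x])       # intercept + slope design matrix
w = 1.0 / sigma                                 # weights undo the heteroskedasticity
beta_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
print(beta_wls)                                 # [intercept, slope]
```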