Test your basic knowledge | FRM Foundations of Risk Management: Quantitative Methods
Subjects: business-skills, certifications, frm
Instructions:
Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study first. Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The three wrong answers for each question are randomly chosen from the answers to other questions, so you may sometimes find the answers obvious; even so, retaking the test reinforces your understanding.
1. Significance = 1 -
Returns over time for a combination of assets (combination of time-series and cross-sectional data)
P(X=x, Y=y) = P(X=x) * P(Y=y)
Confidence level
Sample variance = (1/(k - 1)) Summation(Yi - mean)^2
2. Implications of homoskedasticity
Conditional mean is time-varying; conditional volatility is time-varying (more likely)
Based on an equation: P(A) = # of A / total outcomes
OLS estimators are unbiased, consistent, and normal regardless of homo- or heteroskedasticity; OLS estimates are efficient; can use the homoskedasticity-only variance formula; OLS is BLUE
Sample mean will approach the population mean as the sample size increases
3. GARCH
t = (x - mean(x))/(stddev(x)/sqrt(n)); symmetrical; mean = 0; variance = k/(k - 2); slightly heavy tails (kurtosis > 3)
Generalized Auto-Regressive Conditional Heteroskedasticity model; GARCH(1,1) is the weighted sum of a long-term variance (weight = gamma), the most recent squared return (weight = alpha), and the most recent variance (weight = beta)
In EWMA, the lambda parameter; in GARCH(1,1), the sum of alpha and beta; higher persistence implies slow decay toward the long-run average variance
When the sample size is large, the uncertainty about the value of the sample mean is very small
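The GARCH(1,1) weighted sum described above can be sketched in a few lines of Python. The parameter values below (gamma, alpha, beta) are illustrative assumptions, not fitted estimates:

```python
# Sketch of one GARCH(1,1) variance update: the weighted sum of the
# long-run variance, the most recent squared return, and the most
# recent variance. Parameters here are hypothetical.
def garch_update(long_run_var, prev_return, prev_var,
                 gamma=0.01, alpha=0.07, beta=0.92):
    assert abs(gamma + alpha + beta - 1.0) < 1e-9  # weights sum to one
    return gamma * long_run_var + alpha * prev_return**2 + beta * prev_var

next_var = garch_update(long_run_var=0.0004, prev_return=0.01, prev_var=0.0003)
```

Because alpha + beta = 0.99 here, persistence is high and the forecast decays only slowly toward the long-run variance.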
4. Unbiased
Translates a random number into a cumulative standard normal distribution; Excel: NORMSINV(RAND())
Mean of sampling distribution is the population mean
Statement of the error or precision of an estimate
Among all unbiased estimators, the estimator with the smallest variance is efficient
5. Adjusted R^2
Choose parameters that maximize the likelihood of the observed data occurring
The estimator m that minimizes Summation(Yi - m)^2, the sum of squared gaps
Unconditional is the same regardless of market or economic conditions (unrealistic); conditional depends on the economy, market, or other state
Adjusted R^2 does not increase from the addition of new independent variables; Adjusted R^2 = 1 - (n - 1)/(n - k - 1) * (SSR/TSS) = 1 - su^2/sy^2
6. Central Limit Theorem (CLT)
Use the historical simulation approach but with the EWMA weighting system
Sampling distribution of sample means tends to be normal
Translates a random number into a cumulative standard normal distribution; Excel: NORMSINV(RAND())
P(X=x, Y=y) = P(X=x) * P(Y=y)
7. Standard variable for non-normal distributions
Yi = B0 + B1Xi + ui
Omitted variable is correlated with a regressor; omitted variable is a determinant of the dependent variable
Z = (Y - mean(y))/(stddev(y)/sqrt(n))
Distribution with only two possible outcomes
8. Two ways to calculate historical volatility
Var(X) + Var(Y)
Compute a series of periodic returns; choose a weighting scheme to translate the series into a single metric
Reverse-engineer the implied std dev from the market price: Cmarket = f(implied standard deviation)
SSR
9. Inverse transform method
Observe the sample variance and compare it to a hypothetical population variance: (sample variance/population variance)(n - 1) = chi-squared; non-negative and skewed right; skew approaches zero as n increases; mean = k, where k = degrees of freedom; variance = 2k
Flexible; postulate a stochastic process or resample historical data; full valuation on the target date; more prone to model risk; slow and loses precision due to sampling variation
Translates a random number into a cumulative standard normal distribution; Excel: NORMSINV(RAND())
Changes the sign of the random samples; appropriate when the distribution is symmetric; creates twice as many replications
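The NORMSINV(RAND()) idiom above is the inverse transform method: push a uniform draw through the inverse standard-normal CDF. A minimal Python sketch using only the standard library:

```python
import random
from statistics import NormalDist

# Inverse transform sketch: a uniform random number on (0, 1) is mapped
# through the inverse standard-normal CDF, yielding a standard normal
# draw (the Excel NORMSINV(RAND()) idiom).
def standard_normal_draw(rng=random.random):
    u = rng()                        # uniform on (0, 1)
    return NormalDist().inv_cdf(u)   # standard normal draw

random.seed(42)  # seed chosen arbitrarily for reproducibility
z = standard_normal_draw()
```

The same trick works for any distribution with an invertible CDF, which is why it also appears in the exponential-sampling context.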
10. F distribution
F = [(SSR_restricted - SSR_unrestricted)/q] / [SSR_unrestricted/(n - k_unrestricted - 1)]
Adjusted R^2 does not increase from the addition of new independent variables; Adjusted R^2 = 1 - (n - 1)/(n - k - 1) * (SSR/TSS) = 1 - su^2/sy^2
Variance-ratio distribution: F = variance(x)/variance(y); the greater sample variance is the numerator; non-negative and skewed right; approaches normal as df increases; the square of a t-distribution has an F distribution with (1, k) df; m*F(m, n) approaches chi-squared(m) as n grows
SSR
11. Variance of aX + bY
f(x) = (1/(stddev * sqrt(2 pi))) e^(-(x - mean)^2/(2 variance)); skew = 0; parsimony = only requires mean and variance; summation stability = a combination of two normal distributions is a normal distribution; kurtosis = 3
P(Z>t)
(a^2)(variance(x)) + (b^2)(variance(y))
Rxy = Sxy/(Sx*Sy)
12. Type II Error
E[variance(n+t)] = VL + ((alpha + beta)^t)*(variance(n) - VL)
Accepting a hypothesis that should have been rejected
Low-frequency, high-severity events
Standard error of the regression: SER = sqrt(SSR/(n - 2)) = sqrt(Summation(ei^2)/(n - 2)); SSR = sum of squared residuals = Summation[(Yi - predicted Yi)^2], the sum of each squared deviation between the actual Y and the predicted Y; SER is directly related to SSR
13. Unconditional vs conditional distributions
Covariance(n) = (lambda)(cov(n-1)) + (1 - lambda)(x(n-1))(y(n-1))
Unconditional is the same regardless of market or economic conditions (unrealistic); conditional depends on the economy, market, or other state
Summation((xi - mean)^k)/n
Depends upon lambda, which indicates the rate of occurrence of the random events (binomial) over a time interval: (lambda^k)/(k!) * e^(-lambda)
14. Maximum likelihood method
Sample variance = (1/(k - 1)) Summation(Yi - mean)^2
Choose parameters that maximize the likelihood of the observed data occurring
Variance = (1/m) Summation(u(n-i)^2)
Doesn't imply the added variable is significant; doesn't imply the regressors are a true cause of the dependent variable; doesn't imply there's no OVB; doesn't imply you have the most appropriate set of regressors
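For a concrete instance of "choose parameters that maximize the likelihood of the observed data": for an exponential waiting-time sample, the likelihood-maximizing lambda has the closed form 1/(sample mean). The sample below is hypothetical:

```python
# MLE sketch for exponentially distributed waiting times.
# Maximizing the likelihood analytically gives lambda_hat = 1 / mean,
# i.e. n / sum of the observations. Data below are made up.
def exponential_mle(waits):
    return len(waits) / sum(waits)

waits = [0.5, 1.2, 0.8, 2.1, 0.4]   # sample mean is 1.0
lam = exponential_mle(waits)         # close to 1.0 for this sample
```

Most MLE problems lack such a closed form and are solved numerically, but the principle is the same: pick the parameter value under which the observed data were most probable.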
15. Exponential distribution
Time to wait until an event takes place: f(x) = lambda e^(-lambda x); lambda = 1/beta
F = (1/2)(t1^2 + t2^2 - 2*correlation*t1*t2)/(1 - correlation^2)
Variance(sample mean) = (variance(y)/n) * ((N - n)/(N - 1))
Weights are not a function of time but are based on the nature of the historic period (the more similar to the historic state, the greater the weight)
16. Key properties of linear regression
Regression can be non-linear in variables but must be linear in parameters
Sample variance = (1/(k - 1)) Summation(Yi - mean)^2
Generation of a distribution of returns by use of random numbers; return path decided by an algorithm; correlation must be modeled
Two parameters: alpha (center) and beta (shape); popular for modeling recovery rates
17. Pooled data
Use the historical simulation approach but with the EWMA weighting system
In EWMA, the lambda parameter; in GARCH(1,1), the sum of alpha and beta; higher persistence implies slow decay toward the long-run average variance
E(XY) - E(X)E(Y)
Returns over time for a combination of assets (combination of time-series and cross-sectional data)
18. Historical std dev
Generalized Extreme Value distribution; uses a tail index; a smaller index means fatter tails
Mean = lambda; variance = lambda; std dev = sqrt(lambda)
Simplest and most common way to estimate future volatility: variance(t) = (1/N) Summation(r^2)
Omitted variable is correlated with a regressor; omitted variable is a determinant of the dependent variable
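The unweighted historical estimate above, variance(t) = (1/N) Summation(r^2), is a one-liner; the returns below are hypothetical:

```python
import math

# Simplest historical volatility estimate: the unweighted average of
# squared periodic returns (each squared return gets weight 1/N).
# Returns below are made up for illustration.
def historical_variance(returns):
    return sum(r * r for r in returns) / len(returns)

returns = [0.01, -0.02, 0.015, -0.005]
vol = math.sqrt(historical_variance(returns))  # daily std dev estimate
```

Note the mean return is taken to be zero here, a common simplification for short-horizon volatility estimation; the weighting-scheme questions elsewhere in this test (EWMA, GARCH) replace the equal 1/N weights with declining ones.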
19. Central Limit Theorem
For n > 30, the sample mean is approximately normal
Depends upon lambda, which indicates the rate of occurrence of the random events (binomial) over a time interval: (lambda^k)/(k!) * e^(-lambda)
Variance(sample mean) = (variance(y)/n) * ((N - n)/(N - 1))
Expected value of the sample mean is the population mean
20. Test for statistical independence
Variance(x) + Variance(Y) + 2*covariance(XY)
Two parameters: alpha (center) and beta (shape); popular for modeling recovery rates
P(X=x, Y=y) = P(X=x) * P(Y=y)
Regression can be non-linear in variables but must be linear in parameters
21. Mean reversion in asset dynamics
Price/return tends to revert toward a long-run level
Variance = Summation((alpha weight)(u(n-i)^2)); alpha weights must sum to one
P(Z>t)
Attempts to increase accuracy by reducing sample variance instead of increasing sample size
22. Homoskedastic
Normal, Student's t, chi-square, F distribution
Standard deviation of the sampling distribution: SE = stddev(y)/sqrt(n)
Variance of the conditional distribution of u(i) is constant; t-stat for the slope of the regression: t = (b1 - beta)/SE(b1), where beta is a specified value for the hypothesis test
F = (1/2)(t1^2 + t2^2 - 2*correlation*t1*t2)/(1 - correlation^2)
23. Least squares estimator (m)
Sample mean will approach the population mean as the sample size increases
Variance-ratio distribution: F = variance(x)/variance(y); the greater sample variance is the numerator; non-negative and skewed right; approaches normal as df increases; the square of a t-distribution has an F distribution with (1, k) df; m*F(m, n) approaches chi-squared(m) as n grows
Has heavy tails
The estimator m that minimizes Summation(Yi - m)^2, the sum of squared gaps
24. A priori (classical) probability
Simplest and most common way to estimate future volatility: variance(t) = (1/N) Summation(r^2)
Based on an equation: P(A) = # of A / total outcomes
Coefficient of determination: fraction of variance explained by the independent variables; R^2 = ESS/TSS = 1 - (SSR/TSS)
xbar - t(Sx/sqrt(n)) < mean(x) < xbar + t(Sx/sqrt(n)); a random interval, since it will vary by the sample
25. Regime-switching volatility model
Mean of sampling distribution is the population mean
Explained sum of squares: Summation[(predicted yi - mean(y))^2], the squared distance between the predicted y and the mean of y
When a distribution switches from high to low volatility, but never in between; will exhibit fat tails if unaccounted for
[1/(n - 1)] * Summation((Xi - xbar)(Yi - ybar))
26. Hazard rate of an exponentially distributed random variable
Mean = np; variance = npq; std dev = sqrt(npq)
Parameters (mean, volatility, etc.) vary over time due to variability in market conditions
Refers to whether the distribution is symmetrical; skew = E[(x - mean)^3]/sigma^3; positive skew: mean > median > mode; negative skew: mean < median < mode; if zero, all are equal; a function of the third moment
Lambda = 1/beta is the hazard rate (default intensity); f(x) = lambda e^(-lambda x); F(x) = 1 - e^(-lambda x)
27. Two requirements of OVB
Omitted variable is correlated with a regressor; omitted variable is a determinant of the dependent variable
Coefficient of determination: fraction of variance explained by the independent variables; R^2 = ESS/TSS = 1 - (SSR/TSS)
Accepting a hypothesis that should have been rejected
SE(predicted std dev) = std dev * sqrt(1/(2T)); ten times more precision needs 100 times more replications
28. Perfect multicollinearity
Coefficient of determination: fraction of variance explained by the independent variables; R^2 = ESS/TSS = 1 - (SSR/TSS)
Explained sum of squares: Summation[(predicted yi - mean(y))^2], the squared distance between the predicted y and the mean of y
95% = 1.65, 99% = 2.33 (for one-tailed tests)
When one regressor is a perfect linear function of the other regressors
29. POT
Reverse-engineer the implied std dev from the market price: Cmarket = f(implied standard deviation)
Peaks over threshold: collects the dataset in excess of some threshold
Average return across assets on a given day
Adjusted R^2 does not increase from the addition of new independent variables; Adjusted R^2 = 1 - (n - 1)/(n - k - 1) * (SSR/TSS) = 1 - su^2/sy^2
30. Variance of X + b
Confidence set for two coefficients; the two-dimensional analog of the confidence interval
Variance(x)
In EWMA, the lambda parameter; in GARCH(1,1), the sum of alpha and beta; higher persistence implies slow decay toward the long-run average variance
Changes the sign of the random samples; appropriate when the distribution is symmetric; creates twice as many replications
31. BLUE
Best Linear Unbiased Estimator; the sample mean for samples that are i.i.d.
Observe the sample variance and compare it to a hypothetical population variance: (sample variance/population variance)(n - 1) = chi-squared; non-negative and skewed right; skew approaches zero as n increases; mean = k, where k = degrees of freedom; variance = 2k
Weighted least squares estimator; weights the squares to account for heteroskedasticity and is BLUE
Adjusted R^2 does not increase from the addition of new independent variables; Adjusted R^2 = 1 - (n - 1)/(n - k - 1) * (SSR/TSS) = 1 - su^2/sy^2
32. Hybrid method for conditional volatility
Confidence level
OLS estimators are unbiased, consistent, and normal regardless of homo- or heteroskedasticity; OLS estimates are efficient; can use the homoskedasticity-only variance formula; OLS is BLUE
f(x) = (1/(beta * Gamma(alpha))) e^(-x/beta) * (x/beta)^(alpha - 1); alpha = 1 becomes exponential; alpha = k/2, beta = 2 becomes chi-squared
Use the historical simulation approach but with the EWMA weighting system
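The EWMA weighting system referenced here can be written as the recursion var(n) = lambda * var(n-1) + (1 - lambda) * r(n-1)^2. A sketch, using the common lambda = 0.94 (an illustrative choice, as in RiskMetrics) and seeding with the first squared return:

```python
# EWMA conditional variance sketch:
#   var_n = lambda * var_{n-1} + (1 - lambda) * r_{n-1}^2
# lambda = 0.94 is an assumed, commonly cited value; returns are made up.
def ewma_variance(returns, lam=0.94):
    var = returns[0] ** 2          # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r * r
    return var

var = ewma_variance([0.01, -0.02, 0.015])
```

Unrolling the recursion shows the weights on past squared returns decline geometrically, which is exactly what the hybrid method imports into historical simulation.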
33. SER
Lambda = 1/beta is the hazard rate (default intensity); f(x) = lambda e^(-lambda x); F(x) = 1 - e^(-lambda x)
Standard error of the error term: SER = sqrt(SSR/(n - k - 1)), where k is the # of slope coefficients
Returns over time for an individual asset
Depends upon lambda, which indicates the rate of occurrence of the random events (binomial) over a time interval: (lambda^k)/(k!) * e^(-lambda)
34. Joint probability functions
Instead of independent samples, systematically fills the space left by previous numbers in the series; the standard error shrinks at 1/k instead of 1/sqrt(k), but accuracy is hard to determine since the variables are not independent
Probability that the random variables take on certain values simultaneously
Among all unbiased estimators, the estimator with the smallest variance is efficient
Translates a random number into a cumulative standard normal distribution; Excel: NORMSINV(RAND())
35. Variance of X+Y assuming dependence
Variance(x)
Combine to form distribution with leptokurtosis (heavy tails)
Variance(x) + Variance(Y) + 2*covariance(XY)
Confidence level
36. Sample mean
For n > 30, the sample mean is approximately normal
Sample mean +/- t*(stddev(s)/sqrt(n))
f(x) = (1/(stddev * sqrt(2 pi))) e^(-(x - mean)^2/(2 variance)); skew = 0; parsimony = only requires mean and variance; summation stability = a combination of two normal distributions is a normal distribution; kurtosis = 3
Expected value of the sample mean is the population mean
37. Variance of X+Y
Does not depend on a prior event or information
Var(X) + Var(Y)
Distribution with only two possible outcomes
Least absolute deviations estimator; used when extreme outliers are not uncommon
38. Variance of sample mean
Make parametric assumptions about the covariances of each position and extend them to the entire portfolio; problem: correlations change during stressful market events
Peaks over threshold: collects the dataset in excess of some threshold
Attempts to sample along more important paths
Variance(y)/n = variance of the sample mean
39. Cholesky factorization (decomposition)
f(x) = (1/(stddev * sqrt(2 pi))) e^(-(x - mean)^2/(2 variance)); skew = 0; parsimony = only requires mean and variance; summation stability = a combination of two normal distributions is a normal distribution; kurtosis = 3
Two parameters: alpha (center) and beta (shape); popular for modeling recovery rates
Create the covariance matrix; the covariance matrix (R) is decomposed into a lower-triangle matrix (L) and an upper-triangle matrix (U) that are mirrors of each other, R = LU; solve for all matrix elements; the resulting LU is used to simulate correlated random variables
Compute a series of periodic returns; choose a weighting scheme to translate the series into a single metric
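For the 2x2 case the decomposition R = LU (with U the transpose of L) can be solved by hand, and L then turns independent standard normals into correlated draws. A minimal sketch with a hypothetical correlation of 0.5:

```python
import math
import random

# Cholesky sketch for a 2x2 matrix R = L * L^T. L maps independent
# standard normal draws into correlated ones.
def cholesky_2x2(r):
    l11 = math.sqrt(r[0][0])
    l21 = r[1][0] / l11
    l22 = math.sqrt(r[1][1] - l21 * l21)
    return [[l11, 0.0], [l21, l22]]

R = [[1.0, 0.5], [0.5, 1.0]]            # correlation 0.5 (assumed)
L = cholesky_2x2(R)

z1, z2 = random.gauss(0, 1), random.gauss(0, 1)  # independent normals
x1 = L[0][0] * z1                        # correlated pair (x1, x2)
x2 = L[1][0] * z1 + L[1][1] * z2
```

For larger matrices the same solve-row-by-row logic applies; library routines (e.g. a linear-algebra package's `cholesky`) would normally be used instead of hand-rolling it.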
40. Multivariate Density Estimation (MDE)
Standardized to a unit variable: mean = 0, variance = 1
If the variance of the conditional distribution of u(i) is not constant
Standard deviation of the sampling distribution: SE = stddev(y)/sqrt(n)
Weights are not a function of time but are based on the nature of the historic period (the more similar to the historic state, the greater the weight)
41. Lognormal
Lambda = 1/beta is the hazard rate (default intensity); f(x) = lambda e^(-lambda x); F(x) = 1 - e^(-lambda x)
Attempts to increase accuracy by reducing sample variance instead of increasing sample size
When the asset return (r) is normally distributed, the continuously compounded future asset price level is lognormal; the reverse is true: if a variable is lognormal, its natural log is normal
When the sample size is large, the uncertainty about the value of the sample mean is very small
42. Variance of aX
Special type of pooled data in which the cross-sectional unit is surveyed over time
Price/return tends to revert toward a long-run level
(a^2)(variance(x))
More than one random variable
43. Confidence interval for sample mean
Dataset is parsed into blocks with greater length than the periodicity; observations must be i.i.d.
xbar - t(Sx/sqrt(n)) < mean(x) < xbar + t(Sx/sqrt(n)); a random interval, since it will vary by the sample
When one regressor is a perfect linear function of the other regressors
Combine to form a distribution with leptokurtosis (heavy tails)
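The interval xbar +/- t * s / sqrt(n) is straightforward to compute; the sample below is hypothetical, and the t critical value 2.776 (95%, 4 df) is taken from a t-table rather than computed:

```python
import math
from statistics import mean, stdev

# Confidence-interval sketch: xbar +/- t * s / sqrt(n).
# Sample data and the t critical value (2.776 for 95%, 4 df) are
# illustrative assumptions.
def t_confidence_interval(sample, t_crit):
    m, s, n = mean(sample), stdev(sample), len(sample)
    half_width = t_crit * s / math.sqrt(n)
    return m - half_width, m + half_width

lo, hi = t_confidence_interval([10.1, 9.8, 10.3, 9.9, 10.4], t_crit=2.776)
```

Note `stdev` uses the n - 1 (sample) denominator, matching the population-vs-sample variance distinction tested later.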
44. Simulation models
Compute a series of periodic returns; choose a weighting scheme to translate the series into a single metric
Flexible; postulate a stochastic process or resample historical data; full valuation on the target date; more prone to model risk; slow and loses precision due to sampling variation
(a^2)(variance(x))
Make parametric assumptions about the covariances of each position and extend them to the entire portfolio; problem: correlations change during stressful market events
45. Extending the HS approach for computing value of a portfolio
46. Non-parametric vs parametric calculation of VaR
Z = (Y - mean(y))/(stddev(y)/sqrt(n))
Compute a series of periodic returns; choose a weighting scheme to translate the series into a single metric
Weights are not a function of time but are based on the nature of the historic period (the more similar to the historic state, the greater the weight)
Non-parametric directly uses a historical dataset; parametric imposes a specific distribution assumption
47. Confidence ellipse
Infinite number of values within an interval: P(a<x<b) = the integral from a to b of f(x)dx
Confidence set for two coefficients; the two-dimensional analog of the confidence interval
In EWMA, the lambda parameter; in GARCH(1,1), the sum of alpha and beta; higher persistence implies slow decay toward the long-run average variance
Instead of independent samples, systematically fills the space left by previous numbers in the series; the standard error shrinks at 1/k instead of 1/sqrt(k), but accuracy is hard to determine since the variables are not independent
48. Importance sampling technique
Weights are not a function of time but are based on the nature of the historic period (the more similar to the historic state, the greater the weight)
Statement of the error or precision of an estimate
Attempts to sample along more important paths
P(Z>t)
49. Difference between population and sample variance
Population denominator = n; sample denominator = n - 1
Variance(sample mean) = (variance(y)/n) * ((N - n)/(N - 1))
Sample mean will approach the population mean as the sample size increases
Non-parametric directly uses a historical dataset; parametric imposes a specific distribution assumption
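The n versus n - 1 denominator distinction is easy to see side by side; the data below are made up:

```python
# Population variance divides by n; sample variance divides by n - 1
# (Bessel's correction, which makes the estimator unbiased).
def population_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

xs = [2, 4, 6, 8]        # hypothetical data; mean = 5, sum of sq. devs = 20
pop_var = population_variance(xs)   # 20 / 4 = 5.0
samp_var = sample_variance(xs)      # 20 / 3 ≈ 6.67
```

For large n the two converge, which is why the distinction matters most for small samples.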
50. Simplified standard (unweighted) variance
dS(t) = mean(t) S(t) dt + stddev(t) S(t) dz; GBM (Geometric Brownian Motion); represented as drift + shock; drift = mean * change in time; shock = stddev * epsilon * sqrt(change in time)
Variance = (1/m) Summation(u(n-i)^2)
Var(X) + Var(Y)
Translates a random number into a cumulative standard normal distribution; Excel: NORMSINV(RAND())