Test your basic knowledge | FRM Foundations Of Risk Management Quantitative Methods
Subjects: business-skills, certifications, frm
Instructions:
Answer 50 questions in 15 minutes. If you are not ready to take this test, you can study here.
Match each statement with the correct term.
Don't refresh. All questions and answers are randomly picked and ordered every time you load a test.
This is a study tool. The 3 wrong answers for each question are randomly chosen from answers to other questions. So you might find some answers obvious at times, but you will see it reinforces your understanding as you take the test each time.
1. Inverse transform method
Returns over time for a combination of assets (combination of time series and cross - sectional data)
Sum of n i.i.d. Bernoulli variables - Probability of k successes: (combination n over k)(p^k)(1 - p)^(n - k) - (n over k) = (n!)/((n - k)!k!)
Distribution with only two possible outcomes
Translates a uniform random number into a standard normal draw via the inverse cumulative standard normal distribution - EXCEL: NORMSINV(RAND())
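As a study aid (not part of the quiz), the inverse transform method above can be sketched with Python's standard library; `statistics.NormalDist.inv_cdf` plays the role of Excel's NORMSINV and `random.random()` the role of RAND():

```python
import random
import statistics

def normsinv_rand(rng=random.random):
    """Inverse transform method: push a uniform draw on (0, 1)
    through the inverse cumulative standard normal distribution,
    i.e. the Python analogue of Excel's NORMSINV(RAND())."""
    u = rng()
    return statistics.NormalDist(0, 1).inv_cdf(u)

# The median uniform value 0.5 maps to the standard normal median, 0.
z_mid = statistics.NormalDist(0, 1).inv_cdf(0.5)
```

Repeated calls to `normsinv_rand()` yield i.i.d. standard normal draws, which is how Monte Carlo simulations are typically seeded.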
2. Type I error
We reject a hypothesis that is actually true
Standard error of the regression - SER = sqrt(SSR/(n - 2)) = sqrt(Summation(ei^2)/(n - 2)) - SSR = sum of squared residuals = Summation[(Yi - predicted Yi)^2] - the sum of each squared deviation between the actual Y and the predicted Y - SER is directly related to SSR
Non - parametric directly uses a historical dataset - Parametric imposes a specific distribution assumption
Confidence level
3. Variance(discrete)
Has heavy tails
Sum of n i.i.d. Bernoulli variables - Probability of k successes: (combination n over k)(p^k)(1 - p)^(n - k) - (n over k) = (n!)/((n - k)!k!)
E[(Y - meany)^2] = E(Y^2) - [E(Y)]^2
Mean = np - Variance = npq - Std dev = sqrt(npq)
4. Single variable (univariate) probability
Expected value of the sample mean is the population mean
Variance reverts to a long run level
E[(Y - meany)^2] = E(Y^2) - [E(Y)]^2
Concerned with a single random variable (ex. Roll of a die)
5. Confidence interval (from t)
Sample mean +/ - t*(stddev(s)/sqrt(n))
More than one random variable
Variance = (1/m) summation(u<n - i>^2)
Flexible and postulate stochastic process or resample historical data - Full valuation on target date - More prone to model risk - Slow and loses precision due to sampling variation
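For illustration, the t-based confidence interval above can be computed directly. The critical value is an input here; 2.262 is the two-tailed 95% value for 9 degrees of freedom, taken from a t-table, so treat it as an assumed constant:

```python
import math
import statistics

def t_confidence_interval(sample, t_crit):
    """Sample mean +/- t * s / sqrt(n), with s the sample standard
    deviation (n - 1 denominator) and t_crit supplied by the caller."""
    n = len(sample)
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)
    half = t_crit * s / math.sqrt(n)
    return x_bar - half, x_bar + half

# 10 observations; the 95% two-tailed t for df = 9 is about 2.262.
lo, hi = t_confidence_interval([4, 5, 6, 5, 7, 5, 4, 6, 5, 5], 2.262)
```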
6. BLUE
Create covariance matrix - Covariance matrix (R) is decomposed into a lower - triangle matrix (L) and an upper - triangle matrix (U) that are mirror images of each other - R = LU - Solve for all matrix elements - L is then used to simulate correlated random variables
Parameters (mean - volatility - etc) vary over time due to variability in market conditions
We accept a hypothesis that should have been rejected
Best Linear Unbiased Estimator - Sample mean for samples that are i.i.d.
7. Kurtosis
Variance of conditional distribution of u(i) is constant - T - stat for slope of regression T = (b1 - beta)/SE(b1) - beta is a specified value for hypothesis test
Dataset is parsed into blocks with greater length than the periodicity - Observations must be i.i.d.
Measures degree of "peakedness" - Value of 3 indicates normal distribution - Kurtosis = E[(X - mean)^4]/sigma^4 - Function of fourth moment
(a^2)(variance(x)) + (b^2)(variance(y))
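A quick sketch of the kurtosis formula above, using population-style moments (the data values are arbitrary examples):

```python
def kurtosis(xs):
    """Fourth central moment over sigma^4: E[(X - mean)^4] / sigma^4."""
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n   # variance (sigma^2)
    m4 = sum((x - mu) ** 4 for x in xs) / n   # fourth central moment
    return m4 / m2 ** 2

# A two-point distribution has the minimum possible kurtosis, 1.
flat = kurtosis([-1, 1, -1, 1])
# Adding extreme observations fattens the tails and raises kurtosis above 3.
heavy = kurtosis([-9, -1, 0, 0, 0, 0, 1, 9])
```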
8. Lognormal
Nonlinearity
P(X=x - Y=y) = P(X=x) * P(Y=y)
When asset return(r) is normally distributed - the continuously compounded future asset price level is lognormal - Reverse is true - if a variable is lognormal - its natural log is normal
P(Z>t)
9. Joint probability functions
Transformed to a unit variable - Mean = 0 Variance = 1
Probability that the random variables take on certain values simultaneously
P(X=x - Y=y) = P(X=x) * P(Y=y)
Combine to form distribution with leptokurtosis (heavy tails)
10. Biggest (and only real) drawback of GARCH mode
Nonlinearity
When a distribution switches from high to low volatility - but never in between - Will exhibit fat tails if unaccounted for
Generalized Extreme Value Distribution - Uses a tail index - smaller index means fatter tails
Probability of an outcome given another outcome P(Y|X) = P(X and Y)/P(X) - P(B|A) = P(A and B)/P(A)
11. Cholesky factorization (decomposition)
Independently and Identically Distributed
Variance(x) + Variance(Y) + 2*covariance(XY)
f(x) = (1/(stddev*sqrt(2pi)))*e^( - (x - mean)^2/(2*variance)) - skew = 0 - Parsimony = only requires mean and variance - Summation stability = combination of two normal distributions is a normal distribution - Kurtosis = 3
Create covariance matrix - Covariance matrix (R) is decomposed into a lower - triangle matrix (L) and an upper - triangle matrix (U) that are mirror images of each other - R = LU - Solve for all matrix elements - L is then used to simulate correlated random variables
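In the 2x2 case the factorization above can be written out by hand. A sketch (pure Python, no matrix library, arbitrary example numbers) of using it to turn two independent standard normals into a correlated pair:

```python
import math
import random

def cholesky_2x2(var_x, var_y, cov_xy):
    """Lower-triangular L with L * L^T equal to the covariance
    matrix [[var_x, cov_xy], [cov_xy, var_y]]."""
    l11 = math.sqrt(var_x)
    l21 = cov_xy / l11
    l22 = math.sqrt(var_y - l21 ** 2)
    return l11, l21, l22

def correlated_pair(var_x, var_y, cov_xy, rng=random):
    """Apply L to two independent standard normal draws."""
    l11, l21, l22 = cholesky_2x2(var_x, var_y, cov_xy)
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return l11 * z1, l21 * z1 + l22 * z2
```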
12. Perfect multicollinearity
Summation((xi - mean)^k)/n
Assumes a value among a finite set including x1 - x2 - etc - P(X=xk) = f(xk)
Mean = np - Variance = npq - Std dev = sqrt(npq)
When one regressor is a perfect linear function of the other regressors
13. Poisson Distribution
Depends upon lambda - which indicates the rate of occurrence of the random events (binomial) over a time interval - (lambda^k)/(k!) * e^( - lambda)
Ignores order of observations (no weight for most recent observations) - Has a ghosting feature where data points are dropped due to length of window
Choose parameters that maximize the likelihood of what observations occurring
Generalized Auto Regressive Conditional Heteroscedasticity model - GARCH(1,1) is the weighted sum of a long term variance (weight=gamma) - the most recent squared return (weight=alpha) and the most recent variance (weight=beta)
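The Poisson probability above is easy to check numerically (a sketch; lambda = 2 events per interval is an arbitrary example):

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) = (lambda^k / k!) * e^(-lambda)."""
    return lam ** k / math.factorial(k) * math.exp(-lam)

# With lambda = 2, P(0 events) = e^(-2), and the pmf sums to 1.
p_zero = poisson_pmf(0, 2.0)
total = sum(poisson_pmf(k, 2.0) for k in range(60))
```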
14. Binomial distribution equations for mean variance and std dev
Nonlinearity
Use historical simulation approach but use the EWMA weighting system
P(Z>t)
Mean = np - Variance = npq - Std dev = sqrt(npq)
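The binomial moment formulas above can be cross-checked against the binomial pmf (a sketch; n = 10 and p = 0.3 are arbitrary inputs):

```python
import math

def binomial_moments(n, p):
    """Mean = np, variance = npq, std dev = sqrt(npq), with q = 1 - p."""
    q = 1 - p
    return n * p, n * p * q, math.sqrt(n * p * q)

def binomial_pmf(k, n, p):
    """P(k successes) = C(n, k) * p^k * (1 - p)^(n - k)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# The formula mean np agrees with the mean computed from the pmf.
mean, var, sd = binomial_moments(10, 0.3)
pmf_mean = sum(k * binomial_pmf(k, 10, 0.3) for k in range(11))
```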
15. Sample correlation
Minimizes Summation(Yi - m)^2 - the sum of squared gaps
P(X=x - Y=y) = P(X=x) * P(Y=y)
Rxy = Sxy/(Sx*Sy)
Covariance = (lambda)(cov(n - 1)) + (1 - lambda)(xn - 1)(yn - 1)
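A direct translation of Rxy = Sxy/(Sx*Sy), using n - 1 denominators throughout (the n - 1 factors cancel, but they are kept to mirror the definitions):

```python
import math

def sample_correlation(xs, ys):
    """Rxy = Sxy / (Sx * Sy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sxy / (sx * sy)

# Perfectly linear data gives +1; reversing the direction gives -1.
r_up = sample_correlation([1, 2, 3], [2, 4, 6])
r_down = sample_correlation([1, 2, 3], [3, 2, 1])
```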
16. Two drawbacks of moving average series
x - t(Sx/sqrt(n)) < mean(x) < x + t(Sx/sqrt(n)) - Random interval since it will vary by the sample
Normal - Student's T - Chi - square - F distribution
Has heavy tails
Ignores order of observations (no weight for most recent observations) - Has a ghosting feature where data points are dropped due to length of window
17. Panel data (longitudinal or micropanel)
Can use alpha and beta weights to solve for the long - run average variance - VL = w/(1 - alpha - beta)
Sum of n i.i.d. Bernoulli variables - Probability of k successes: (combination n over k)(p^k)(1 - p)^(n - k) - (n over k) = (n!)/((n - k)!k!)
Translates a uniform random number into a standard normal draw via the inverse cumulative standard normal distribution - EXCEL: NORMSINV(RAND())
Special type of pooled data in which the cross sectional unit is surveyed over time
18. Skewness
Changes the sign of the random samples - appropriate when distribution is symmetric - creates twice as many replications
Low Frequency - High Severity events
Refers to whether distribution is symmetrical - Skew = E[(x - mean)^3]/sigma^3 - Positive skew = mean>median>mode - Negative skew = mean<median<mode - if zero - all are equal - Function of the third moment
Generalized Extreme Value Distribution - Uses a tail index - smaller index means fatter tails
19. Continuously compounded return equation
Generation of a distribution of returns by use of random numbers - Return path decided by algorithm - Correlation must be modeled
Mean of sampling distribution is the population mean
Time to wait until an event takes place - F(x) = lambda e^( - lambdax) - Lambda = 1/beta
r<i> = ln(S<i>/S<i - 1>)
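A one-liner for the continuously compounded return, plus the key property that such returns add across periods (prices here are arbitrary examples):

```python
import math

def cc_return(s_now, s_prev):
    """Continuously compounded return: ln(S_i / S_(i-1))."""
    return math.log(s_now / s_prev)

# Returns over 100 -> 110 and 110 -> 121 sum to the 100 -> 121 return.
r1 = cc_return(110.0, 100.0)
r2 = cc_return(121.0, 110.0)
```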
20. Potential reasons for fat tails in return distributions
Simplest and most common way to estimate future volatility - Variance(t) = (1/N) Summation(r^2)
Confidence set for two coefficients - two dimensional analog for the confidence interval
Weighted least squares estimator - Weights the squares to account for heteroskedasticity and is BLUE
Conditional mean is time - varying - Conditional volatility is time - varying (more likely)
21. K - th moment
Choose parameters that maximize the likelihood of what observations occurring
More than one random variable
Summation((xi - mean)^k)/n
Depends upon lambda - which indicates the rate of occurrence of the random events (binomial) over a time interval - (lambda^k)/(k!) * e^( - lambda)
22. Test for unbiasedness
Only requires two parameters = mean and variance
E(mean) = mean
Variance = (1/m) summation(u<n - i>^2)
Instead of independent samples - systematically fills space left by previous numbers in the series - Std error shrinks at 1/k instead of 1/sqrt(k) but accuracy determination is hard since variables are not independent
23. Homoskedastic
Variance of conditional distribution of u(i) is constant - T - stat for slope of regression T = (b1 - beta)/SE(b1) - beta is a specified value for hypothesis test
(a^2)(variance(x)) + (b^2)(variance(y))
Regression can be non - linear in variables but must be linear in parameters
E[(Y - meany)^2] = E(Y^2) - [E(Y)]^2
24. Key properties of linear regression
Mean of sampling distribution is the population mean
In EWMA - the lambda parameter - In GARCH(1 -1) - sum of alpha and beta - Higher persistence implies slow decay toward the long - run average variance
Regression can be non - linear in variables but must be linear in parameters
Weighted least squares estimator - Weights the squares to account for heteroskedasticity and is BLUE
25. Variance of X+Y
Reverse engineer the implied std dev from the market price - Cmarket = f(implied standard deviation)
Variance(y)/n = variance of sample Y
Based on a dataset
Var(X) + Var(Y)
26. Two ways to calculate historical volatility
Concerned with a single random variable (ex. Roll of a die)
Compute series of periodic returns - Choose a weighting scheme to translate a series into a single metric
Contains variables not explicit in model - Accounts for randomness
Population denominator = n - Sample denominator = n - 1
27. Simplified standard (un - weighted) variance
Variance = (1/m) summation(u<n - i>^2)
Least absolute deviations estimator - used when extreme outliers are not uncommon
SSR
Instead of independent samples - systematically fills space left by previous numbers in the series - Std error shrinks at 1/k instead of 1/sqrt(k) but accuracy determination is hard since variables are not independent
28. Continuous representation of the GBM
dS<t> = mean*S<t>*dt + stddev*S<t>*dz<t> - GBM = Geometric Brownian Motion - Represented as drift + shock - Drift = mean * change in time - Shock = std dev * epsilon * sqrt(change in time)
Least absolute deviations estimator - used when extreme outliers are not uncommon
Standard error of the regression - SER = sqrt(SSR/(n - 2)) = sqrt(Summation(ei^2)/(n - 2)) - SSR = sum of squared residuals = Summation[(Yi - predicted Yi)^2] - the sum of each squared deviation between the actual Y and the predicted Y - SER is directly related to SSR
P - value
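The drift + shock representation above discretises directly. A sketch (the parameters are arbitrary examples, with epsilon drawn via `random.gauss`):

```python
import math
import random

def simulate_gbm_path(s0, mu, sigma, dt, steps, rng=random):
    """One GBM path: each step adds drift mu*S*dt plus a shock
    sigma*S*epsilon*sqrt(dt), epsilon a standard normal draw."""
    path = [s0]
    for _ in range(steps):
        s = path[-1]
        shock = sigma * s * rng.gauss(0, 1) * math.sqrt(dt)
        path.append(s + mu * s * dt + shock)
    return path

# With sigma = 0 the shock vanishes and only the 5% drift remains.
drift_only = simulate_gbm_path(100.0, 0.05, 0.0, 1.0, 1)
```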
29. Binomial distribution
F = [(SSR<restricted> - SSR<unrestricted>)/q]/[SSR<unrestricted>/(n - k<unrestricted> - 1)]
Based on a dataset
Sum of n i.i.d. Bernoulli variables - Probability of k successes: (combination n over k)(p^k)(1 - p)^(n - k) - (n over k) = (n!)/((n - k)!k!)
Historical simulation with replacement - Vector is chosen at random from historic period for each simulated period
30. Covariance calculations using weight sums (lambda)
Refers to whether distribution is symmetrical - Skew = E[(x - mean)^3]/sigma^3 - Positive skew = mean>median>mode - Negative skew = mean<median<mode - if zero - all are equal - Function of the third moment
Weights are not a function of time - but based on the nature of the historic period (more similar to historic stake - greater the weight)
Covariance = (lambda)(cov(n - 1)) + (1 - lambda)(xn - 1)(yn - 1)
Does not depend on a prior event or information
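The recursive covariance update above in code (lambda = 0.94 is a common choice in practice, e.g. in RiskMetrics, but is an assumption here):

```python
def ewma_covariance(cov_prev, x_prev, y_prev, lam=0.94):
    """cov_n = lambda * cov_(n-1) + (1 - lambda) * x_(n-1) * y_(n-1)."""
    return lam * cov_prev + (1 - lam) * x_prev * y_prev

# Update yesterday's covariance estimate with yesterday's return pair.
new_cov = ewma_covariance(0.0004, 0.01, 0.02)
```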
31. Marginal unconditional probability function
T = (x - meanx)/(stddev(x)/sqrt(n)) - Symmetrical - mean = 0 - Variance = k/(k - 2) - Slightly heavy tail (kurtosis>3)
Normal - Student's T - Chi - square - F distribution
Standard deviation of the sampling distribution SE = std dev(y)/sqrt(n)
Does not depend on a prior event or information
32. Sample mean
E[(Y - meany)^2] = E(Y^2) - [E(Y)]^2
Standard error of error term - SER = sqrt(SSR/(n - k - 1)) - k is the # of slope coefficients
Expected value of the sample mean is the population mean
F = [(SSR<restricted> - SSR<unrestricted>)/q]/[SSR<unrestricted>/(n - k<unrestricted> - 1)]
33. Standard error
Standard deviation of the sampling distribution SE = std dev(y)/sqrt(n)
Sample variance = (1/(k - 1))Summation(Yi - mean)^2
Depends on whether X and the error term u are positively or negatively correlated - Estimated beta1 = beta1 + correlation(X - u)*(stddev(u)/stddev(X))
Flexible and postulate stochastic process or resample historical data - Full valuation on target date - More prone to model risk - Slow and loses precision due to sampling variation
34. Result of combination of two normal with same means
Dataset is parsed into blocks with greater length than the periodicity - Observations must be i.i.d.
Can use alpha and beta weights to solve for the long - run average variance - VL = w/(1 - alpha - beta)
Combine to form distribution with leptokurtosis (heavy tails)
(a^2)(variance(x)) + (b^2)(variance(y))
35. GARCH
Conditional mean is time - varying - Conditional volatility is time - varying (more likely)
95% = 1.65 - 99% = 2.33 - for one - tailed tests
Generalized Auto Regressive Conditional Heteroscedasticity model - GARCH(1,1) is the weighted sum of a long term variance (weight=gamma) - the most recent squared return (weight=alpha) and the most recent variance (weight=beta)
Variance(y)/n = variance of sample Y
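The GARCH(1,1) update and its long-run variance relation VL = w/(1 - alpha - beta) in code (the parameter values are arbitrary examples; alpha + beta must be below 1 for the long-run variance to exist):

```python
def garch11_update(omega, alpha, beta, u_prev, var_prev):
    """sigma^2_n = omega + alpha * u_(n-1)^2 + beta * sigma^2_(n-1),
    where omega = gamma * VL."""
    return omega + alpha * u_prev ** 2 + beta * var_prev

def garch11_long_run_variance(omega, alpha, beta):
    """VL = omega / (1 - alpha - beta)."""
    return omega / (1 - alpha - beta)

# Example parameters: omega = 2e-6, alpha = 0.13, beta = 0.86.
vl = garch11_long_run_variance(2e-6, 0.13, 0.86)
```

At the long-run level the update reproduces itself: plugging u^2 = VL and variance = VL back in returns VL, which is why VL is called the long-run average variance.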
36. R^2
Coefficient of determination - fraction of variance explained by independent variables - R^2 = ESS/TSS = 1 - (SSR/TSS)
Variance(sample y) = (variance(y)/n)*((N - n)/(N - 1))
Unconditional is the same regardless of market or economic conditions (unrealistic) - Conditional depends on the economy - market - or other state
Low Frequency - High Severity events
37. Tractable
Random walk (usually acceptable) - Constant volatility (unlikely)
Standard error of the regression - SER = sqrt(SSR/(n - 2)) = sqrt(Summation(ei^2)/(n - 2)) - SSR = sum of squared residuals = Summation[(Yi - predicted Yi)^2] - the sum of each squared deviation between the actual Y and the predicted Y - SER is directly related to SSR
Coefficient of determination - fraction of variance explained by independent variables - R^2 = ESS/TSS = 1 - (SSR/TSS)
Easy to manipulate
38. Variance of weighted scheme
When one regressor is a perfect linear function of the other regressors
Generalized Auto Regressive Conditional Heteroscedasticity model - GARCH(1,1) is the weighted sum of a long term variance (weight=gamma) - the most recent squared return (weight=alpha) and the most recent variance (weight=beta)
Best Linear Unbiased Estimator - Sample mean for samples that are i.i.d.
Variance = summation(alpha weight)(u<n - i>^2) - alpha weights must sum to one
39. Direction of OVB
Depends on whether X and the error term u are positively or negatively correlated - Estimated beta1 = beta1 + correlation(X - u)*(stddev(u)/stddev(X))
Standard error of the regression - SER = sqrt(SSR/(n - 2)) = sqrt(Summation(ei^2)/(n - 2)) - SSR = sum of squared residuals - Summation[(Yi - predicted Yi)^2] - the sum of each squared deviation between the actual Y and the predicted Y - SER is directly related to SSR
Rxy = Sxy/(Sx*Sy)
Weights are not a function of time - but based on the nature of the historic period (more similar to historic stake - greater the weight)
40. Reliability
Statement of the error or precision of an estimate
Among all unbiased estimators - estimator with the smallest variance is efficient
Price/return tends to run towards a long - run level
Parameters (mean - volatility - etc) vary over time due to variability in market conditions
41. Empirical frequency
Based on a dataset
Mean = np - Variance = npq - Std dev = sqrt(npq)
Confidence set for two coefficients - two dimensional analog for the confidence interval
Non - parametric directly uses a historical dataset - Parametric imposes a specific distribution assumption
42. Two requirements of OVB
Does not depend on a prior event or information
Apply today's weight for yesterday's returns "what would happen if we held this portfolio in the past"
(a^2)(variance(x)) + (b^2)(variance(y))
Omitted variable is correlated with regressor - Omitted variable is a determinant of the dependent variable
43. ESS
Refers to whether distribution is symmetrical - Skew = E[(x - mean)^3]/sigma^3 - Positive skew = mean>median>mode - Negative skew = mean<median<mode - if zero - all are equal - Function of the third moment
Returns over time for an individual asset
Explained sum of squares - Summation[(predicted yi - meany)^2] - Squared distance between the predicted y and the mean of y
Combine to form distribution with leptokurtosis (heavy tails)
44. Mean(expected value)
(a^2)(variance(x)) + (b^2)(variance(y))
Discrete: E(Y) = Summation(xi*pi) - Continuous: E(X) = integral(x*f(x)dx)
Peaks over threshold - Collects dataset in excess of some threshold
EVT - Fits a separate distribution to the extreme loss tail - Only uses tail
45. Efficiency
Best Linear Unbiased Estimator - Sample mean for samples that are i.i.d.
Variance(x)
Random walk (usually acceptable) - Constant volatility (unlikely)
Among all unbiased estimators - estimator with the smallest variance is efficient
46. Variance of sample mean
Variance = summation(alpha weight)(u<n - i>^2) - alpha weights must sum to one
SE(predicted std dev) = std dev * sqrt(1/(2T)) - Ten times more precision needs 100 times more replications
Concerned with a single random variable (ex. Roll of a die)
Variance(y)/n = variance of sample Y
47. Logistic distribution
Simplest approach to extending horizon - J - period VaR = sqrt(J) * 1 - period VaR - Only applies under i.i.d
Observe sample variance and compare it to hypothetical population variance (sample variance/population variance)(n - 1) = chi - squared - Non - negative and skewed right - skew approaches zero as df increases - mean = k where k = degrees of freedom - Variance = 2k
Standard error of the regression - SER = sqrt(SSR/(n - 2)) = sqrt(Summation(ei^2)/(n - 2)) - SSR = sum of squared residuals - Summation[(Yi - predicted Yi)^2] - the sum of each squared deviation between the actual Y and the predicted Y - SER is directly related to SSR
Has heavy tails
48. F distribution
Regression can be non - linear in variables but must be linear in parameters
Mean = lambda - Variance = lambda - Std dev = sqrt(lambda)
Variance ratio distribution F = (variance(x)/variance(y)) - Greater sample variance is numerator - Nonnegative and skewed right - Approaches normal as df increases - Square of a t - distribution has an F distribution with (1 - k) df - m*F(m - n) approaches Chi - squared with m df as n increases
E[(Y - meany)^2] = E(Y^2) - [E(Y)]^2
49. Significance =1
Confidence level
[1/(n - 1)]*summation((Xi - X)(Yi - Y))
Price/return tends to run towards a long - run level
95% = 1.65 - 99% = 2.33 - for one - tailed tests
50. Hybrid method for conditional volatility
Variance = (1/m) summation(u<n - i>^2)
Use historical simulation approach but use the EWMA weighting system
Apply today's weight for yesterday's returns "what would happen if we held this portfolio in the past"
F = (1/2)*((t1^2) + (t2^2) - 2*correlation(t1 - t2)*t1*t2)/(1 - correlation(t1 - t2)^2)