FERM - Ch.14: Quantifying Particular Risks



Reading Source: Textbook - Financial Enterprise Risk Management

Topics Covered in this Reading:

  • Market and Economic Risk
    • Characteristics of Financial Time Series
    • Modelling Market and Economic Risks
    • Expected Returns
    • Benchmarks
    • The Black-Scholes Model
  • Interest Rate Risk
    • Interest Rate Definitions
    • Single Factor Interest Rate Models
    • Multi-Factor Interest Rate Models
    • PCA-based Approaches
    • The Black Model
  • Foreign Exchange Risk
  • Credit Risk
    • Qualitative Credit Models
    • Quantitative Credit Models
    • Credit Portfolio Models
    • The Extent of Loss
    • Credit Risk and Market Risk
  • Liquidity Risk
  • Systemic Risk
  • Demographic Risk
    • Level Risk
    • Volatility Risk
    • Catastrophic Risk
    • Trend Risk
    • Other Demographic Risks
  • Non-Life Insurance Risk
    • Pricing for High Claim Frequency Classes
    • Reserving for High Claim Frequency Classes
    • Low Claim Frequency Classes
  • Operational Risk


Can anyone help explain the parametric VaR(99%) equation and where those numbers come from in the concept example on page 91 of the PAK ERM Core manual?


Hi there, I do not have the PAK manual so I can’t refer directly to the numbers but I can try to help answer your question anyway.

I would explain in three (hopefully) logical steps:

  1. Ask yourself what VaR is.
    VaR(99) (in this case) is the point below which 99% of the observations in the distribution occur.

  2. Understand parametric VaR.
    Parametric VaR means that we estimate this point using the parameters of the distribution. Suppose you have a standard normal distribution - mean =0, standard deviation = 1. The VaR(99) of this distribution would simply be the alpha value in the equation, i.e. 2.326 (this value agrees with the standard normal distribution table at the 99th percentile).

  3. Scale the VaR
    Despite not having the numbers, I can tell this is not a standard normal distribution (correct me if I am wrong) - so we need to “scale” the VaR to fit the distribution. We can do this with three factors:

  • W is the wealth or size of the portfolio (usually…not sure here because I don’t have the reference material). In this case it should be 1, which is why you see the “W” term disappear from the equation once the numbers are added.
  • Sigma is the actual standard deviation of the distribution. In the question, the variance of the distribution is probably given as 6.4988 (which implies sigma = sd = sqrt(6.4988)).
  • Sqrt(t) is the time component. Someone can correct me if I am wrong, but to derive the sqrt relation, I think of an example of two years:
    - Suppose X1 and X2 are variables that each model one year of returns, each normally distributed with mean mu and variance sigma^2.
    - Over two years, your returns would also be normally distributed: X1 + X2 ~ N(2mu, 2sigma^2), since a sum of independent normal variables is normal. [This assumes the years are independent, which they likely are not, but I think that’s just an assumption this method makes.]
    - Since the variance of your two-year return is 2sigma^2, your two-year std. dev. is sqrt(2sigma^2) = sqrt(2)*sigma, which is where this piece of the equation comes from.

So from that we get:

VaR(99%) = W × 2.326 × σ × √t
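To make the scaling concrete, here is a minimal Python sketch. The function name and defaults are just for illustration, and the variance 6.4988 is the value guessed at above:

```python
from math import sqrt
from statistics import NormalDist

def parametric_var(confidence, variance, wealth=1.0, horizon=1.0):
    """Parametric (normal) VaR: W * z_alpha * sigma * sqrt(t)."""
    z = NormalDist().inv_cdf(confidence)  # ~2.326 at the 99th percentile
    return wealth * z * sqrt(variance) * sqrt(horizon)

# With the variance 6.4988 quoted above, W = 1 and t = 1:
var_99 = parametric_var(0.99, 6.4988)
```

Note that quadrupling the horizon only doubles the VaR, which is exactly the sqrt(t) scaling derived in step 3.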


I hope that helps a bit.


Hi there,

I am not sure if I am supposed to start a new thread for a new topic. Please let me know if that’s the case…

Anyway, on page 318 of this book (1st edition), the author lists the negative correlation between credit spreads and interest rates as one of the reasons why credit spreads have historically been far higher than the spread needed to compensate an investor for the risk of default. I don’t quite get this. Can someone elaborate on it?



Hi everyone,

Does anyone know what the KMV model exactly is?
It is defined as using the capital structure of the firm to estimate the probability of default, but the textbook only gives us the distance to default. My question is what the distance to default is and how we can use it to calculate the probability of default.

Any thoughts will be appreciated!


Kealhofer, McQuown, and Vasicek (hey, Vasicek, that’s a name you might recognize!) came up with a model that was meant to take some of the new financial tools that appeared after the creation of the Black-Scholes model and apply them to corporations and their debt.

The key insight was that you could treat a firm’s equity sort of like a call option on its assets, with a strike price equal to its debt. If the net assets are above that level, the call option has value - if not, it has no value (and the firm defaults). To measure this, they came up with a metric called “distance to default” which looks VERY similar to something you’d see in the Black-Scholes model.

But, the key here is that distance to default doesn’t really “mean” anything. At least, not in the traditional sense. You can’t take a default distance of 1.5 and say “This means this firm will default in 1.5 years, or is 1.5 times more likely to default than another.”

But, what they found was that, while the absolute value of this number was kind of meaningless, it was a good predictor of default. Firms with a bad value defaulted a lot, and firms with a good value defaulted less. So while the number doesn’t really tell you anything by itself, it is an index that you can use to analyze historical results, and that can tell you about future results. Maybe, historically, we find that a firm with a default distance of 1.5 defaults 7% of the time. We can estimate that this will hold going forward as well.
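As a rough Python sketch of the two-step idea (compute the distance to default, then look it up against history): the formula follows the (X0 − B)/(X0·σX) definition quoted from the textbook later in this thread, and the table numbers are entirely invented for illustration — the 7% at 1.5 just echoes the hypothetical figure above.

```python
def distance_to_default(assets, debt, asset_vol):
    """Distance to default as defined in the text: (X0 - B) / (X0 * sigma_X)."""
    return (assets - debt) / (assets * asset_vol)

# Hypothetical historical mapping: observed default frequency for firms at
# or above each distance-to-default level (numbers invented for illustration).
historical_default_freq = {3.0: 0.001, 2.0: 0.02, 1.5: 0.07, 1.0: 0.15}

def estimated_default_prob(dd):
    """Map a distance to default to a default probability via the table."""
    for threshold in sorted(historical_default_freq, reverse=True):
        if dd >= threshold:
            return historical_default_freq[threshold]
    return max(historical_default_freq.values())  # worse than every bucket
```

The point is that the table, not the formula, carries the probabilistic meaning: the DD number is only an index into historical experience.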


Hi Nash,

The idea here is of an investor trying to achieve a certain return. Let’s say 5%. Perhaps the investor can get this with a corporate bond that has a 2% risk-free rate and a 3% spread -> this achieves the 5%. Now, because spreads and interest rates have historically been negatively correlated, when one goes up, the other tends to go down. So even if the risk-free rate goes down to 1%, maybe the spread goes to 4%. Or if the risk-free rate goes up to 3%, maybe the spread goes to 2%. The changes are unlikely to be exactly offsetting, but being negatively correlated makes it more likely that they will somewhat offset and your anticipated return of 5% will still be met. This assurance through diversification is one argument for why credit spreads should be lower: the diversification benefit is itself an attractive offering to investors.
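To make the offsetting concrete, a tiny sketch using hypothetical scenario numbers like the ones above:

```python
# Hypothetical (risk-free rate, credit spread) scenarios illustrating how
# negative correlation keeps the total yield near the 5% target.
scenarios = [
    (0.02, 0.03),  # base case: 2% risk-free + 3% spread = 5%
    (0.01, 0.04),  # rates fall, spread widens
    (0.03, 0.02),  # rates rise, spread tightens
]
total_yields = [rf + spread for rf, spread in scenarios]
```

In these (deliberately exact) scenarios the total stays at 5% even as each leg moves by a full percentage point; in reality the offset would only be partial.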

Of course, there were also many other reasons given in the article to support the idea that credit spreads have been traditionally “too high”.


Thank you very much Paul,

May I ask a further question?
As you said, the distance to default means nothing by itself, but I think the greater its value, the lower the probability of default will be, and the distance to default will never be negative as long as the firm has not defaulted. Is that correct?

Additionally, the textbook defines the distance to default as (X0 − B)/(X0 · σX), with adjusted debt value B, asset value X0 and volatility σX. Then the KMV model derives values for X0 and σX from the quoted value of a firm’s equity rather than assuming that they are directly observable. What does this sentence mean?:thinking:

Lastly, distances to default are calculated for thousands of companies and then mapped to probabilities of default. How can these numbers be mapped to probabilities of default?

Any help will be appreciated!:sweat_smile::sweat_smile:



I have a couple of questions about bonds and interest rate models:

  1. This chapter talks about the different rates used to discount bonds. I understand spot rates and forward rates, but was hoping someone could explain what the gross redemption yield is?
  2. What is the point of interest rate models (Vasicek, CIR) and what are they used for? Are they used to model the risk-free rate? Do they model the entire yield curve or just single points in time?



This chapter talks about the different rates used to discount bonds. I understand spot rates and forward rates, but was hoping someone could explain what the gross redemption yield is?

The gross redemption yield is essentially the yield to maturity. That is: it is the rate of return earned on a bond over the entire life of the bond. To determine this, you would need to solve for r such that:

P(bond) = Coupon/(1+r)+Coupon/(1+r)^2+…+(Coupon + Maturity Amount)/(1+r)^n

Note: The gross redemption yield doesn’t take into account taxes or costs associated with buying/selling the bond.
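A quick sketch of solving that equation for r numerically (annual coupons assumed; bisection is just one simple root-finding choice, and the function names are mine):

```python
def bond_price(coupon, face, ytm, n):
    """Price of an annual-coupon bond discounted at a flat yield."""
    return (sum(coupon / (1 + ytm) ** t for t in range(1, n + 1))
            + face / (1 + ytm) ** n)

def gross_redemption_yield(price, coupon, face, n, lo=-0.5, hi=2.0, tol=1e-10):
    """Solve bond_price(...) = price for the yield r by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bond_price(coupon, face, mid, n) > price:
            lo = mid   # model price too high => the yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a 10-year bond priced at par with a 5% annual coupon solves to a gross redemption yield of 5%.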

What is the point of interest rate models (Vasicek, CIR) and what are they used for? Are they used to model the risk-free rate? Do they model the entire yield curve or just single points in time?

These models are used to simulate the evolution of the short rate (the instantaneous risk-free rate) over time. From a simulated short-rate path you can then derive an entire spot yield curve at any point in time. Each model specifies different dynamics for the short rate.
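As a rough illustration, here is a minimal Euler-scheme simulation of the Vasicek dynamics dr = a(b − r)dt + σ dW. The parameter names and values below are illustrative only:

```python
import random

def simulate_vasicek(r0, a, b, sigma, dt, steps, seed=0):
    """One Euler-discretised path of dr = a*(b - r)*dt + sigma*dW."""
    random.seed(seed)
    rates = [r0]
    for _ in range(steps):
        r = rates[-1]
        dr = a * (b - r) * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
        rates.append(r + dr)
    return rates

# Mean reversion toward b = 5% from r0 = 1%, ten years of monthly steps:
path = simulate_vasicek(r0=0.01, a=0.5, b=0.05, sigma=0.01, dt=1/12, steps=120)
```

The a(b − r) term is the mean reversion that distinguishes these models from a plain random walk: with σ set to 0 the path just drifts monotonically toward b.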


Can someone explain the PCA approach and how we use it to model market and economic risks?


I’ve only briefly looked at this material so far… but from my understanding, it basically gives you a “duration analysis” but with more information.

For example, using traditional duration analysis, we would have a bond value and a duration. And then any changes to interest rates would just be multiplied by the bond duration. This is to give an idea of sensitivity of an asset (in this case, a bond) to changes in interest rates. But this is only helpful when analyzing a flat change in the interest rates.

With PCAs, you have a sensitivity amount at every maturity for a given “component”. I think I’ve heard of components such as a parallel shift up, a steepening of the yield curve, and a “twisting” of the yield curve. Those could be three possible components. With PCAs, you can determine the sensitivity to each of these components at all the different maturities, so you can model specifically what would happen at each point on the curve under various different impacts (whether a parallel shift or something else), rather than only a flat change.

This is my high level understanding though, I’m not sure I really understand how these components are chosen, or how exactly they would be modeled. Not sure if anyone else has gotten further along with this reading yet


Hi Everyone,
In 14.2.4, it mentions correlation between benchmark, portfolio, and market return.
Could anyone kindly explain why
• the correlation between rX −rU and rB −rU should be strongly positive;
• the correlation between rX −rB and rB −rU should be close to zero (Is it because the return of portfolio and benchmark is very similar?)

In this list, rU is the market return, rB is the benchmark return and rX is the portfolio return.

Thanks a lot.


Hi @Lyra

I look at it this way.

  1. rX-rU is the difference between the portfolio return and the market return, and rB-rU is the difference between the benchmark return and the market return. The whole point of the benchmark is to behave very much like the portfolio, so rX-rU should always be similar to rB-rU. If these values are always similar, they move together, which means they are strongly positively correlated.

  2. rX-rB should be close to zero at all times because the portfolio and the benchmark are ideally trying to represent the same thing. rB-rU, on the other hand, is not always going to be zero: the difference between the benchmark and the market return will fluctuate. But as rB-rU fluctuates, rX-rB should still remain close to 0. So as one value moves, the other barely moves at all, which indicates a correlation close to zero because the movements are unrelated.
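A quick simulation sketch of this: the return distributions and noise sizes below are invented purely to illustrate the two correlations, with the benchmark deviating from the market but the portfolio closely tracking the benchmark.

```python
import random

def corr(a, b):
    """Population correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

def simulate_correlations(n=10_000, seed=1):
    """Return corr(rX-rU, rB-rU) and corr(rX-rB, rB-rU) for simulated returns."""
    random.seed(seed)
    xu, bu, xb = [], [], []
    for _ in range(n):
        rU = random.gauss(0.05, 0.15)      # market return
        rB = rU + random.gauss(0.0, 0.05)  # benchmark deviates from market
        rX = rB + random.gauss(0.0, 0.01)  # portfolio tracks the benchmark
        xu.append(rX - rU)
        bu.append(rB - rU)
        xb.append(rX - rB)
    return corr(xu, bu), corr(xb, bu)
```

Under these assumptions the first correlation comes out close to +1 and the second close to 0, matching the two bullet points in the question.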

Does this make sense? Or is it still a little unclear?


Hi Andrew,
Thanks a lot for your clear explanation.