News & Blogs

VALUE AT RISK: A PLACEBO EFFECT?

Value at Risk (VaR) is a measure of the risk of investments. It estimates how much a set of investments might lose, given normal market conditions, in a set time period such as a day. VaR is typically used by firms and regulators in the financial industry to gauge the amount of assets needed to cover possible losses.

A VaR statistic has three components: a time period, a confidence level and a loss amount (or loss percentage). If a portfolio worth $1 million has a one-day 5% VaR of 0.82% (a loss of $8.2k), that means there is a 5% probability that the portfolio will fall in value by more than $8.2k over a one-day period, assuming no trading takes place.

The History

The first attempts to measure risk, and thus to express potential losses in a portfolio, are attributed to Francis Edgeworth and date back to 1888. He made important contributions to statistical theory, advocating the use of data from past experience as the basis for estimating future probabilities.

The origins of VaR can be further traced to capital requirements for US securities firms of the early 20th century, starting with an informal capital test the New York Stock Exchange (NYSE) first applied to member firms around 1922. The original NYSE rule required firms to hold capital equal to 10% of assets comprising proprietary positions and customer receivables.

 

The history of VaR continued in 1945, when Dickson H. Leavens produced what is considered the first mention of a VaR-like risk measure, although he did not use the name value at risk. He attempted to measure the value of a portfolio of ten independent government bonds, each of which would either mature at $1,000 or become worthless. He mentioned the notion of "the spread between the likely profit and loss", by which he most likely meant the standard deviation, which is used to measure risk and is an important part of VaR.

In 1975, the US Securities and Exchange Commission (SEC) established a Uniform Net Capital Rule (UNCR) for US broker-dealers trading non-exempt securities. This included a system of “haircuts” applied to a firm’s capital as a safeguard against market losses that might arise during the time it would take to liquidate positions. Volatility in US interest rates motivated the SEC to update these haircuts in 1980. The new haircuts, a VaR-like metric, were based upon a statistical analysis of historical market data. They were intended to reflect a 95% quantile of the amount of money a firm might lose over a one-month liquidation period.

The credit for the current use of VaR is attributed mainly to the US investment bank JP Morgan. In 1994, its chairman, Dennis Weatherstone, asked for something simple that would cover the whole spectrum of risks faced by the bank over the next 24 hours. Using Markowitz portfolio theory, the bank developed VaR. At the time it was called the 4:15 report, because it was handed out every day at 4:15 pm, just after the market closed. It allowed him to see each desk’s estimated profit and loss, compared with its risk, and how it all added up for the entire firm. The origin of the name “value at risk”, however, is unknown. JP Morgan formed a small group, called RiskMetrics, that published a technical document describing the system and also posted it on the Internet so that other risk experts could suggest improvements (much like open-source code). This was followed by mass adoption of the system by many institutions, and VaR was popularized as the risk measure of choice among investment banks looking to measure their portfolio risk for the benefit of banking regulators.

In 1996, the Basel Committee approved the limited use of proprietary value-at-risk measures for calculating the market risk component of bank capital requirements. In this and other ways, regulatory initiatives helped motivate the development of proprietary value-at-risk measures.

VaR Methods

Although the various models for calculating VaR use different methodologies, they all retain the same general structure, which can be summarized in the following steps: (i) calculation of the present value of the portfolio (its mark-to-market value), which is a function of the current values of market factors (interest rates, exchange rates and so on); (ii) estimation of the distribution of changes in the portfolio value (this step is the main difference among the VaR methods); and (iii) calculation of the VaR itself.

Historical Simulation

A relatively simple method in which distributions can be non-normal and securities can be non-linear. The historical approach involves keeping a record of past price changes. It is essentially a simulation technique that assumes the changes in prices and rates realized over an earlier period are representative of what could happen over the forecast horizon, i.e. it assumes that the past will be repeated. It takes those actual changes, applies them to the current set of rates, and then revalues the portfolio under each scenario. The resulting profits and losses are sorted by size, from the largest loss at one end to the largest profit at the other, and the VaR is read off at the pre-set percentile of the loss tail. In practice, the resulting distribution tends to differ from the normal distribution: financial data commonly have fat tails, meaning that the probability of extremely large positive as well as extremely large negative changes is higher than under the normal distribution.
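For illustration, a minimal Python sketch of the historical-simulation approach might look like the following. The 500 fat-tailed returns below are synthetic stand-ins for a real price history, and the function and variable names are purely illustrative rather than taken from any particular system:

```python
import numpy as np

def historical_var(pnl_scenarios, confidence=0.95):
    """Historical-simulation VaR: the loss exceeded in only (1 - confidence) of the scenarios."""
    # VaR is the (1 - confidence) quantile of the P&L distribution, reported as a positive loss
    return -np.percentile(pnl_scenarios, (1 - confidence) * 100)

# Apply 500 past daily returns (here drawn from a fat-tailed Student-t as a stand-in
# for a real price history) to the current value of a $1 million position.
np.random.seed(0)
past_returns = np.random.standard_t(df=4, size=500) * 0.01
position_value = 1_000_000
pnl_scenarios = position_value * past_returns   # re-valued profit/loss for each scenario
print(f"1-day 95% historical VaR: ${historical_var(pnl_scenarios):,.0f}")
```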

Analytical Approach

Analytical VaR also goes by other names, such as Variance-Covariance VaR, Parametric VaR, Linear VaR or Delta-Normal VaR. The method was introduced in the RiskMetrics™ system. It consists of going back in time and computing variances and correlations for all risk factors; portfolio risk is then computed by combining linear exposures to the various factors with the forecast covariance matrix. The method therefore requires positions on the risk factors, volatility forecasts and correlations for each risk factor. The analytical approach is generally not appropriate for portfolios that hold non-linear assets such as options, or instruments with embedded options such as mortgage-backed securities, callable bonds and many structured notes.

After selecting the holding period and confidence level, the 1-day VaR can be calculated by a simple formula. A prerequisite for using this formula is the assumption that changes in the value of the portfolio are normally distributed:

VaR_{1\text{-day}}(\alpha) = Z(\alpha) \cdot \sigma \cdot \text{Asset Value}

Where:

α is the level of confidence,

σ is the standard deviation of changes (volatility) in the portfolio over a given time horizon

Z is the normal distribution statistic for a given level of confidence (α)

It is possible to calculate the T-day VaR by multiplying the 1-day VaR by the square root of T, where T represents the new holding period:

VaR_{T\text{-days}}(\alpha) = Z(\alpha) \cdot \sigma \cdot \text{Asset Value} \cdot \sqrt{T}
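As a sketch of the two formulas above, the short Python snippet below computes the 1-day and T-day parametric VaR using the normal quantile for Z(α). The $1 million position and the assumed 0.5% daily volatility are illustrative; with those numbers the result reproduces the $8.2k one-day figure from the introduction:

```python
from scipy.stats import norm

def analytical_var(asset_value, sigma_daily, confidence=0.95, horizon_days=1):
    """Parametric (delta-normal) VaR: Z(alpha) * sigma * asset value, scaled by sqrt(T)."""
    z = norm.ppf(confidence)                     # e.g. ~1.645 for 95% confidence
    return z * sigma_daily * asset_value * horizon_days ** 0.5

print(analytical_var(1_000_000, 0.005))                    # 1-day 95% VaR, roughly $8.2k
print(analytical_var(1_000_000, 0.005, horizon_days=10))   # 10-day VaR via the square-root-of-time rule
```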

The overall VaR of a portfolio of two assets (a and b) is not the simple sum of the individual VaRs:

VaR_{\text{portfolio}}^{2} = w_a^2 \, VaR_a^2 + w_b^2 \, VaR_b^2 + 2 \, w_a w_b \, VaR_a \, VaR_b \, \rho_{ab}

Analytical VaR of a portfolio of n > 2 assets is somewhat more complex and is obtained from the vector of individual VaRs and the correlation matrix:

VaR_{\text{portfolio}} = \sqrt{x \, \rho \, x^{T}}

Where:

x = (VaR_1, VaR_2, \ldots, VaR_{n-1}, VaR_n) - the vector of the VaRs of each asset in the portfolio.

\rho_{ij} - the correlation between the i-th and j-th assets.
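A minimal Python sketch of the n-asset combination above, assuming each entry of x is the stand-alone dollar VaR of a position (weights already included) and that ρ is the correlation matrix; the three positions and correlations below are made-up numbers for illustration:

```python
import numpy as np

def portfolio_var(individual_vars, correlation):
    """Combine stand-alone VaRs through the correlation matrix: VaR_p = sqrt(x . rho . x^T)."""
    x = np.asarray(individual_vars, dtype=float)
    rho = np.asarray(correlation, dtype=float)
    return float(np.sqrt(x @ rho @ x))

# Three positions with stand-alone VaRs of $10k, $6k and $4k
x = [10_000, 6_000, 4_000]
rho = [[1.0, 0.3, 0.1],
       [0.3, 1.0, 0.5],
       [0.1, 0.5, 1.0]]
print(f"Diversified portfolio VaR: ${portfolio_var(x, rho):,.0f}")   # less than the $20k simple sum
```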


Monte Carlo

It is widely regarded as the most sophisticated VaR method and can be used when the previous methods cannot, for example when a portfolio is characterized by fat tails, is too heterogeneous, or when historical data are not available. The Monte Carlo method makes assumptions about the distribution of changes in market prices and rates, collects data to estimate the parameters of that distribution, and uses those assumptions to generate successive sets of possible future realizations of changes in those rates. The method is based on the assumption that the risk factors affecting the value of the portfolio (or asset) are driven by a random, or stochastic, process (an example is shown below). The random process is simulated many times (e.g., 10,000 times). The result is a simulated distribution of revalued portfolio values (or asset prices); as in the historical method, the outcomes are ranked and the appropriate VaR is selected. The more simulations, the more accurate the resulting distribution. The Monte Carlo method can easily be adjusted to the distribution of the risk factors. However, it is computationally burdensome, which is a problem for routine use: running the analyses can take hours or even days, and speeding them up requires complicated techniques such as variance reduction.

S_{t+\Delta t} = S_t \cdot \exp\!\left[\left(\mu - \frac{\sigma^2}{2}\right)\Delta t + \sigma \sqrt{\Delta t}\,\varepsilon\right]

S: Asset price; t: time period; ∆t: change in time; µ: expected growth rate in asset price;

σ: price volatility of the asset at time t; ε: random variable drawn from a standard normal distribution
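A brief Python sketch of this Monte Carlo procedure, simulating the geometric-Brownian-motion step above 10,000 times and reading the VaR off the simulated P&L distribution; the asset price, drift, volatility and horizon are illustrative assumptions:

```python
import numpy as np

def monte_carlo_var(s0, mu, sigma, dt, n_sims=10_000, confidence=0.95, seed=42):
    """Simulate terminal prices with the GBM step above and take VaR from the simulated P&L."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_sims)                                  # epsilon ~ N(0, 1)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * eps)
    pnl = s_t - s0                                                     # simulated profit/loss per path
    return -np.percentile(pnl, (1 - confidence) * 100)                 # loss at the chosen quantile

# Example: $100 asset, 5% annual drift, 20% annual volatility, 1-day horizon (1/252 of a year)
print(f"1-day 95% Monte Carlo VaR per unit: ${monte_carlo_var(100, 0.05, 0.20, 1/252):.2f}")
```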

LTCM event – VaR exposed

In 1998, Long Term Capital Management (LTCM), the world’s largest hedge fund, had become one of the most highly leveraged hedge funds in history. It had a capital base of $3 billion, controlled over $100 billion in assets worldwide, and held derivatives whose notional value exceeded $1.25 trillion. The fund’s investment strategy relied heavily upon convergence arbitrage and therefore required a high level of leverage in order to achieve a satisfactory rate of return. LTCM believed that historical trends in securities movements were an accurate predictor of future movements. Their faith in this belief led them to sell options for which the implied volatility was higher than the historical volatility. There was an assumption that the portfolio was sufficiently diversified across world markets to produce low correlation. But in most markets LTCM was replicating basically the same credit spread trade.

 

To predict and mitigate its risk exposures, LTCM used a combination of different VaR techniques. LTCM claimed that its VaR analysis showed that investors might experience a loss of 5% or more in about one month in five, and a loss of 10% or more in about one month in ten. Only one year in fifty should it lose at least 20% of its portfolio. LTCM also estimated that a 45% drop in its equity value over the course of a month was a 10 standard deviation event; in other words, a scenario that should never occur in the history of the universe. Unfortunately for Long-Term Capital Management and its investors, this event did happen.

In August 1998, an unexpected non-linearity occurred that was beyond the detection scope of the VaR models used by LTCM. Russia defaulted on its sovereign debt, and liquidity in the global financial markets began to dry up as derivative positions were rapidly unwound. Trades that were assumed to be independent, i.e. to have low correlation, turned south together, raising correlations and eliminating diversification benefits just at the moment when they were most needed. So sure were the firm’s partners that the market would revert to “normal” — which is what their model insisted would happen — that they continued to take on exposures that would destroy the firm as the crisis worsened. The LTCM VaR models had estimated that the fund’s daily loss would be no more than $50 million of capital. However, the fund soon found itself losing around $100 million every day. On the fourth day after the Russian default, it lost $500 million in a single trading session. As a result LTCM began preparations for declaring bankruptcy.

The US Federal Reserve, fearing that LTCM’s collapse could paralyze the entire global financial system due to its enormous, highly leveraged derivatives positions, organized a $3.6 billion bailout of the fund by its major creditors, creating a major moral hazard for other adventurous hedge funds. Consequently, LTCM’s failure can be attributed in large part to its reliance on VaR.

Not a Panacea for Risk Management

Past is not the Future

Unfortunately, the past is not a perfect indicator of the future. On October 19, 1987, for example, two-month S&P futures contracts fell by 29%. Under a lognormal hypothesis with annualized volatility of 20% (approximately the historical volatility on this security), this would have been a –27 standard deviation event; in other words, the probability of such an event occurring would have been about 10^-160, a probability so remote that it should be virtually impossible. On October 13, 1989 the S&P 500 fell about 6%, which under the above assumptions would be a five standard deviation event, one expected to occur only once every 14,756 years. Hence VaR measures that use history, or assumptions based on past patterns, are not foolproof, because people tend not to be able to anticipate a future they have never personally experienced. Prior to the 2008 debacle, the triple-A-rated mortgage-backed securities churned out by Wall Street firms turned out to be little more than junk, partly because VaR generally relied on a tame two-year data history to model a wildly different environment. It is as if the historical data contained only rainstorms and then a tornado hit. “The years 2005-2006, which were the culmination of the housing bubble, weren’t a very good universe for predicting what happened in 2007-2008”: this was one of Alan Greenspan’s primary excuses when he made his mea culpa for the financial crisis before Congress.


A false sense of security

Since the financial crisis of 2008, there has been a great deal of talk, even in quant circles, that this widespread institutional reliance on VaR was a terrible mistake. At the very least, the risks that VaR measured did not include the biggest risk of all: the possibility of a financial meltdown like that of 2008, or of the financial crises of the preceding years. VaR has been a relatively ineffective risk-management tool for helping firms side-step potentially catastrophic moments in history. It tends to create a false sense of security among senior managers and watchdogs, much like a car airbag that works all the time, except during an accident. Regulators sleep soundly in the knowledge that, thanks to VaR, they have the whole risk thing under control. Even the boards of financial institutions, who hear a VaR number once or twice a year, are lulled to sleep if it sounds good. It is the placebo effect at work: people like to have one number they can believe in.


Can be gamed

It turns out that VaR can be gamed, as it creates a perverse incentive for banks reporting their VaRs to take more risk. To motivate managers, banks began to compensate them not just for making big profits but also for making profits with seemingly low risk. Managers therefore began to manipulate the VaR by loading up on asymmetric risk positions. These are products or contracts that, in general, generate small gains and very rarely have losses. But when they do have losses, the losses are huge. A good example is a credit-default swap, which is essentially insurance that a particular company won’t default. The gains made from selling credit-default swaps are small and steady, and the chance of ever having to pay out on that insurance was assumed to be minuscule. It was outside the 99 percent probability, so it didn’t show up in the VaR number. Thus VaR gives cover to trades that make slow, steady profits and then eventually spiral downward for a giant, brutal loss.

 

Blind to Black Swans

Nassim Nicholas Taleb propounded ‘black swans’ as unexpected events of large magnitude and consequence and their dominant role in history. Such events, considered extreme outliers, collectively play vastly larger roles than regular occurrences. Risk management tools like VaR cannot credibly gauge the kind of extreme events that destroy capital and create a liquidity crisis — precisely the moment when you need cash on hand. The essential reason for this is that the greatest risks are never the ones you can see and measure, but the ones you can’t see and therefore can never measure. The ones that seem so far outside the boundary of normal probability that you can’t imagine they could happen in your lifetime — even though, of course, they do happen. The experience of LTCM is a case in point. More recently the Best Picture faux pas at the Oscars was also a black swan event.

Leverage ignored

VaR also does not properly account for leverage employed through the use of options. For example, if an asset manager borrows money to buy shares of a company, the VaR will usually increase. But suppose he instead enters into a contract that gives someone the right to sell him those shares at a lower price at a later time — a put option. In that case, the VaR might remain unchanged. From the outside, he would look as if he were taking no risk, but in fact he is.

Liquidity risk not Measured

One of VaR’s flaws, which only became obvious in the 2008 financial crisis, is that it does not measure liquidity risk, and a liquidity crisis is exactly what banks encounter in the middle of a financial downturn. One reason nobody seems to know how to deal with this kind of crisis is that nobody envisions the dynamics of a liquidity imbroglio, and VaR doesn’t either. In war you want to know who can kill you, whether or not they will, and who you can kill if necessary; you need an emergency backup plan that assumes everyone is out to get you. In peacetime, you think about other people’s intentions; in wartime, only their capabilities matter. VaR is a peacetime statistic.


User determines VaR Potency

However, VaR is just the messenger; the people interpreting the message are the source of the problem. A computer does not do risk modelling, people do. Laying the blame at the doorstep of a mathematical equation therefore misses the point. You can’t blame the math, just as it is not the car but the person behind the wheel who determines whether there will be an accident. An incident at Goldman Sachs prior to the 2008 crisis makes the point.

 

Reporters wanted to understand how Goldman had somehow sidestepped the disaster of 2008 that had befallen everyone else. What they discovered was that in December 2006, Goldman’s various indicators, including VaR and other risk models, began suggesting that something was wrong. Not hugely wrong, but wrong enough to warrant a closer look. That December, Goldman’s mortgage business lost money for 10 days in a row. So Goldman called a meeting of about 15 people, including several risk managers and the senior people on the various trading desks, to get a sense of their gut feeling. After gathering an all-round sense that things could get worse, a decision was made that it was time to rein in the risk. The bank got rid of its mortgage-backed securities or hedged the positions so that, if they declined in value, the hedges would counteract the loss with an equivalent gain. And that’s why, back in the summer of 2007, Goldman Sachs avoided the pain that was being suffered by Bear Stearns, Merrill Lynch, Lehman Brothers and the rest of Wall Street. Goldman Sachs acted wisely by reading the faint cautionary signals from its risk models and making decisions based on more subjective degrees of belief about an uncertain future.

Conclusion

VaR is a useful risk-management tool precisely when the numbers seem off or when it starts to miss its targets on a regular basis. That either means there is something wrong with the way the VaR is being calculated, or it means the market is no longer acting normally. Either way, it tells you something about the direction risk is heading and should put risk managers on alert.

VaR worked for Goldman Sachs the way it once worked for Dennis Weatherstone: it gave the firm a signal that allowed it to make a judgment about risk. It wasn’t the only signal, but it helped. It wasn’t just the math that helped Goldman sidestep the early decline of mortgage-backed instruments, but it wasn’t just judgment either; it was both. The problem on Wall Street at the end of the housing bubble was that all judgment was cast aside, and the math alone was never going to be enough.

In the end: Nothing ever happens until it happens for the first time.