Observing Market Risk

Why does every decade produce a once-in-twenty-year event? Because the statistics support this result.

Owing to a series of actions taken in the public and private sectors beginning in the early 1980s, Value-at-Risk (VaR) became and has remained the archetypal metric for analyzing Market Risk, the risk of loss due to changes in market prices. Because of the highly uncertain price dynamics in financial markets, a statistical approach is a natural choice. Despite its longevity, however, VaR is hardly perfect.

Each VaR method has its own characteristics: proponents often cite the simplicity of the Analytical approach, the realism of the Historical approach, and the adaptability of Monte Carlo. Conventional wisdom, however, faults all implementations for their dependence on the future resembling the past. Many also criticize the widespread reliance on the Normal distribution (although popular, that assumption is not necessary).
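To make the Historical approach concrete, here is a minimal sketch. The return series, the 95% confidence level, and the `historical_var` helper are illustrative assumptions, not data or code from any particular implementation; the method simply reads the loss threshold off the empirical distribution of past returns, with no distributional assumption.

```python
import random

# Synthetic stand-in for a history of daily returns (NOT real market data).
random.seed(42)
daily_returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]

def historical_var(returns, confidence=0.95):
    """Historical VaR: the loss exceeded on the worst (1 - confidence)
    fraction of past days, reported as a positive number."""
    sorted_returns = sorted(returns)                     # worst returns first
    cutoff = int((1.0 - confidence) * len(sorted_returns))
    return -sorted_returns[cutoff]

var_95 = historical_var(daily_returns)
print(f"95% one-day historical VaR: {var_95:.4f}")
```

Note the appeal of this method: the estimate inherits fat tails, skew, and any other feature actually present in the sample, which is exactly the "realism" its proponents cite.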

Missing from the ongoing discussion is a universal understanding of the statistical implications. What is the true meaning of a conventional VaR statement asserting an x% probability of a loss greater than r over some period t? Do “state-of-the-art” techniques employ simplifying assumptions of which Risk Management professionals are not even aware? The question posed at the beginning suggests a failing of VaR. Given the drastic difference between 10- and 20-year timeframes, non-Normality and historical dependency make for obvious (and easy) culprits.
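For concreteness, one conventional reading of such a statement under the Analytical (parametric Normal) method is sketched below. The mean, volatility, and 1% tail probability are assumed numbers for illustration only: the statement "x% probability of losing more than r in one day" pins r to the x-th percentile of the assumed return distribution.

```python
from statistics import NormalDist

# Hypothetical daily return parameters (assumptions, not estimates).
mu, sigma = 0.0005, 0.01   # assumed mean and volatility of daily returns
x = 0.01                   # tail probability: "1% chance of a worse loss"

# Under N(mu, sigma^2), r is the loss such that P(return < -r) = x,
# i.e. -r is the inverse CDF evaluated at x.
r = -NormalDist(mu, sigma).inv_cdf(x)
print(f"1-day loss threshold r at {x:.0%} tail probability: {r:.4f}")
```

The entire statement therefore rests on the assumed distribution and its parameters; whether that quantile means what practitioners think it means is precisely the question at issue.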

Even if the distribution employed by VaR analysis is accurate AND future market moves perfectly resemble the past, such supposed once-in-twenty-year events should still be expected every decade. Many finance professionals are unaware of this reality.
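One way to see the arithmetic (a sketch, independent of any particular VaR implementation or of Sand Key Research's method): treat a "once-in-twenty-year" event as one with a 1/20 annual probability, assumed independent across years, and ask how often at least one such event lands inside a 10-year window. For a single series the chance is already about 40%, and across a few hypothetically independent markets it approaches certainty.

```python
import random

# A "once-in-twenty-year" event: annual exceedance probability 1/20,
# assumed i.i.d. across years and (hypothetically) across markets.
P_ANNUAL = 1.0 / 20.0
YEARS = 10

# Exact probability that a single series shows >= 1 event in a decade.
p_single = 1.0 - (1.0 - P_ANNUAL) ** YEARS           # about 0.40

def p_any(n_markets):
    """At least one event somewhere among n independent markets."""
    return 1.0 - (1.0 - p_single) ** n_markets

# Monte Carlo cross-check of the single-series figure.
random.seed(0)
TRIALS = 100_000
hits = sum(
    any(random.random() < P_ANNUAL for _ in range(YEARS))
    for _ in range(TRIALS)
)
print(f"exact single-market probability: {p_single:.3f}")
print(f"simulated:                       {hits / TRIALS:.3f}")
print(f"across 5 independent markets:    {p_any(5):.3f}")
```

No fat tails and no regime changes are needed: under the model's own assumptions, a decade without a "twenty-year" event somewhere is the surprising outcome, not the other way around.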

Sand Key Research’s patent-pending method provides a view of Market Risk consistent with the experience of investors.