Impact case study database
- Submitting institution
- The London School of Economics and Political Science
- Unit of assessment
- 10 - Mathematical Sciences
- Summary impact type
- Economic
- Is this case study continued from a case study submitted in 2014?
- No
1. Summary of the impact
An innovative method for estimating extreme quantiles of multiple random variables was developed at the LSE in collaboration with investment bank Barclays. Its application to the management of the tens of billions of US dollars’ worth of counterparty risk-weighted assets (RWA) held by Barclays has helped the bank meet the requirements of Basel III. Specifically, a new methodology for future counterparty RWA evaluation, in which the extreme quantile estimation method plays a key role, has enabled the bank to calculate an appropriate capital reserve to protect customers’ interests as well as its own business in an effective and efficient manner, avoiding holding excessively large additional capital. This reduces the cost of borrowing and contributes positively to investment and economic growth. Barclays’ new methodology for future counterparty RWA evaluation has withstood backtesting under the Basel III framework since its inception in November 2013.
2. Underpinning research
The research underpinning impacts described here arose in the context of an ongoing research programme within LSE’s Department of Statistics on statistical inference for time series and complex dependent data. The specific underpinning research (published in **[1]**) was directly motivated by a backtesting problem in financial risk management. Between January 2012 and January 2014, Professor Qiwei Yao (with Dr Jinguo Gong, at that time a visiting scholar at LSE) worked with the then-Director of Quantitative Exposure at Barclays to develop an improved backtesting methodology for the bank. They particularly sought to establish a more reliable and robust method to estimate potential future exposure to counterparty credit risk. Yao was invited to join the project on the basis of his established expertise in relevant areas of statistical analysis, especially in time series and dependent data (see, for example, **[2]**).
Backtesting is a primary analytical tool used by banks and their regulators to monitor the performance of risk factor valuation methods adopted by banks. It uses historical data showing realised prices to (back)test the efficacy of existing risk factor models. This is often facilitated by testing whether the extreme quantiles of potential future exposure (PFE) under those models are correctly quantified. PFE refers to the maximum expected credit exposure during the lifetime of transactions with a prespecified probability. It is a key metric to measure counterparty credit risk (CCR) - the risk of suffering a loss because another party to a contract fails to meet its side of the deal.
The figure below provides a simple illustration of the backtesting setup for a price model of a term structure asset; that is, an asset which can be traded with different time maturities at different prices. Herein, a realised price path (i.e. the actually traded prices at different maturities) is represented by X1, X2, …, Xp and the solid curves represent the price distributions at different time maturities determined by a price model.
Backtesting is a method for assessing whether the realised price path is an extreme event with very small probability, say, 0.01%, under those price distributions. If so, this would strongly suggest a mis-specified price model and a “red light” would be flagged. Two complicating factors make backtesting difficult:
- the interdependency of prices at different time horizons; and
- the explicit unavailability of price distributions.
Although the distributions are not available, banks store 1,000 simulated price paths as a proxy for them, allowing backtesting based on a comparison of these with a realised price path. (Note that due to various constraints most banks, including Barclays, can only store 1,000 simulated price paths.) Typically, banks take ad hoc approaches to calculating extreme PFE quantiles. Usually this involves raising the PFE profile according to the number of prices exceeding, say, the 98% PFE at each time horizon; raising the PFE at different time horizons together in order to offset the correlations/associations among the prices at different time horizons; and using simulation models which impose various unrealistic conditions. Those ad hoc approaches cannot be justified theoretically, leading to incorrect and sometimes excessively conservative estimation.
A sensible approach to mitigating the difficulties caused by the multiple distributions along different time horizons is to use some appropriate risk metrics which are functions of X1, X2, …, Xp. The challenge, then, is accurately finding the extreme quantiles of those risk metrics with only 1,000 observations, because those quantiles have to be so extreme that they can only occur with probability between 0.05% and 0.01%. The standard approach to achieving this is to appeal to extreme value theory (EVT). Unfortunately, this involves the selection of tuning parameters such as the proportion of extreme-valued data points to be used in estimation. While the asymptotic properties of the tuning parameters are well understood, the methods to choose them in practice are ad hoc, leading to estimates which are too unreliable to be used in evaluating extreme PFE.
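To make the tuning-parameter issue concrete, the sketch below applies a standard EVT recipe (the Hill estimator with a Weissman-type quantile extrapolation) to a synthetic heavy-tailed sample of 1,000 points. It illustrates the conventional approach criticised here, not the method of [1], and all numbers are invented; the point is simply that the estimated 99.99% quantile moves noticeably as the number k of upper order statistics changes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Heavy-tailed synthetic sample standing in for 1,000 simulated values of a
# risk metric; classical Pareto with P(X > t) = t**-3, so the true extreme
# quantile is known and can be compared with the EVT estimates.
x = np.sort(rng.pareto(a=3.0, size=n) + 1.0)

def evt_quantile(x_sorted, k, p):
    """Weissman-type estimate of the (1 - p) quantile, using the Hill
    estimator of the tail index based on the top k order statistics."""
    m = x_sorted.size
    threshold = x_sorted[m - k - 1]                                # (k+1)-th largest value
    gamma = np.mean(np.log(x_sorted[m - k:]) - np.log(threshold))  # Hill estimator
    return threshold * (k / (m * p)) ** gamma

p = 1e-4   # the 99.99% quantile, well beyond the range of 1,000 observations
print(f"true quantile: {p ** (-1.0 / 3.0):.2f}")
for k in (20, 50, 100, 200, 400):
    print(f"k = {k:3d}: estimated quantile = {evt_quantile(x, k, p):7.2f}")
```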
In late 2013, in collaboration with Barclays’ QA Exposure Analytics team, Yao began to tackle the challenge of estimating extreme PFE quantiles based on the small available samples of just 1,000 simulated price paths. The new method developed takes advantage of the fact that the extreme quantiles required are determined by multiple random variables (i.e. X1, X2, …, Xp). The key idea here is that it is not necessary to go to extremes along any component variable in order to observe the joint extreme events. A risk metric can therefore fall into the region of extreme values without any of X1, X2, …, Xp actually taking extreme values. This seemingly counter-intuitive observation is central to the success of the new approach which, despite being readily demonstrable, had never previously been explored either in the extreme value inference literature or in practice.
The resulting method, described in [1], provides a satisfactory solution to quantify the extreme quantiles of PFE accurately and reliably. The method is conceptually simple, easy to implement, and involves no tuning parameters. It provides robust performance in practice. Based on the key observation above, the new method fits a joint distribution of multiple random variables X1, X2, …, Xp within their observed range based on a vine-copula structure which captures the term structure in the data. It then draws a large bootstrap sample from the fitted joint distribution and uses the sample (extreme) quantiles as the estimates for the required quantiles. (Note that the bootstrap sample space generated from a sample of size n of p variables is of the order of n to the p-th power.) This new method is backed up by appropriate asymptotic theory (see **[1]**).
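The sketch below conveys the overall shape of such a procedure on synthetic data: fit a joint distribution to the 1,000 paths within their observed range, draw a very large bootstrap sample from the fit, and read off extreme quantiles of a risk metric. For brevity it uses a Gaussian copula with empirical marginals as a stand-in for the vine-copula construction of [1]; the data, the risk metric, and the quantile level are all invented for illustration.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)

# Hypothetical input: n = 1,000 simulated price paths over p time horizons
# (a stand-in for the bank's stored simulation output).
n, p = 1000, 5
paths = rng.lognormal(mean=0.0, sigma=0.2, size=(n, p)).cumprod(axis=1)

def risk_metric(x):
    # One possible function of X1, ..., Xp: the maximum value along a path.
    return x.max(axis=1)

# Step 1: fit a joint distribution within the observed range. Here, a Gaussian
# copula with empirical marginals (a simplification of the vine copula in [1]).
u = (rankdata(paths, axis=0) - 0.5) / n      # pseudo-observations in (0, 1)
z = norm.ppf(u)                              # normal scores
corr = np.corrcoef(z, rowvar=False)          # copula correlation matrix

# Step 2: draw a large bootstrap sample from the fitted joint distribution
# (of the order n**p points are available in principle; we draw 10**6).
B = 10**6
u_new = norm.cdf(rng.multivariate_normal(np.zeros(p), corr, size=B))
sorted_paths = np.sort(paths, axis=0)
idx = np.minimum((u_new * n).astype(int), n - 1)
x_new = np.take_along_axis(sorted_paths, idx, axis=0)   # back to price scale

# Step 3: read off the extreme quantile of the risk metric from the bootstrap
# sample, e.g. a 99.95% level of the kind used in PFE backtesting.
q = np.quantile(risk_metric(x_new), 0.9995)
print(f"Estimated 99.95% quantile of the risk metric: {q:.3f}")
```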
Yao was the main creator of the new methodology. Gong conducted the numerical experiments. Their partner at Barclays provided the background, contributed to the development of the methodology, and was responsible for the case study reported in [1]. Liang Peng (Professor of Risk Management and Insurance at Georgia State University) was brought in at a later stage to provide the expertise on EVT required for establishing the asymptotic theory for publication [1].
3. References to the research
[1] Gong, J., Li, Y., Peng, L., and Yao, Q. (2015). Estimation of extreme quantiles for functions of dependent random variables. Journal of the Royal Statistical Society, Series B, 77(5), pp. 1001-1024. DOI: 10.1111/rssb.12103.
[2] Fan, J. and Yao, Q. (2003). Nonlinear Time Series: Nonparametric and Parametric Methods. Springer. ISBN: 9780387693958.
4. Details of the impact
The Basel III framework is an internationally agreed set of measures intended to strengthen the regulation, supervision, and risk management of banks, in response to weaknesses exposed by the financial crisis of 2007-2009. It requires banks and other financial institutions to apply a backtesting procedure to various market risk factors, trade, and portfolio prices.
The challenges of managing counterparty credit risk: one of the requirements of Basel III is enhanced management of counterparty credit risk (CCR) [A]. This makes CCR backtesting a mandatory requirement for all banks with advanced model approval for CCR, including Barclays. Basel III also requires investment banks to hold adequate capital and liquidity to cover the CCR, which is the potential loss in derivative positions due to the default of trading counterparties. The reason for this strong emphasis on banks’ proper management of CCR is that it is important to the stability not only of individual banks but also, in light of the interconnections between them, to the financial system as a whole. The failures of Lehman Brothers and MF Global, and their impacts on the global financial system, illustrate the disruptive potential of instability in any part of the system.
However, CCR is a complex risk to assess; as a hybrid of credit and market risk, it is contingent both on changes in the counterparty’s creditworthiness and on movements in underlying market risk factors. The first step in computing CCR capital is typically to jointly simulate various future market risk factors such as interest rates, equities, and foreign exchange rates. Next, all the derivative positions of the bank are computed at each time horizon of each of these simulated market scenarios, to determine the bank’s potential future exposure (PFE) to counterparty default. The amount of holding capital required to cover CCR is then calculated, based on PFE and according to the relevant regulation.
The role of backtesting: backtesting is a critical component of the Basel III regulation, particularly as a means of assessing the extreme PFE quantiles used by banks and financial institutions to validate price models and to calculate their credit holding. The ad hoc approaches typically taken to calculating these extreme quantiles are inaccurate. Underestimating the extreme quantiles leads to the exposure of both banks and their customers to potential uncovered financial losses. Conservative estimation leads to additional unnecessary overheads and, consequently, to increases in the costs of borrowing and decreases in investment.
Implementing the new backtesting method at Barclays: Barclays’ work with Yao on the development of the new method for estimating extreme quantiles published in [1] has changed and improved important aspects of its approach to backtesting. Specifically, the new method has helped Barclays to:
“overcome the difficult technical challenges associated with backtesting uncollateralised portfolio…This allows Barclays to backtest the potential future exposure to counterparty default based on a theoretically sound method for the very first time.” [B]
That new method “has been used as the official production method to backtest Barclays’ CCR exposure RWA [risk weighted assets] since November 2013” [B, further confirmed in C]. As a result, it is now “one of the key components for backtesting our large CCR RWA for uncollateralised portfolio day-to-day” [B, further confirmed in C].
Reducing the cost of borrowing by more accurately calculating an appropriate capital buffer: Barclays holds CCR RWA worth tens of billions of US dollars (commercial sensitivity prevents Barclays from releasing the exact figure). The new methodology is applied across that portfolio, allowing the bank to calculate an adequate capital buffer to protect both its own and its customers’ interests by avoiding the need to hold excessively large additional capital. The direct saving from this new scientifically calculated buffer is substantial. This, in turn, also reduces the cost of borrowing, and potentially increases investment and economic growth, with substantial indirect benefits to society.
Helping Barclays to meet regulatory requirements and improve its CCR management: since its first use by Barclays in late 2013, the new method has withstood rigorous backtesting under the Basel III framework. The Managing Director and Head of Cross Product Modelling of Barclays explains:
“The new methodology has helped Barclays to meet [the] regulator’s requirements and better manage and control our counterparty risk…[because it] has helped Barclays to demonstrate quantitatively that we hold an adequate amount of capital to manage our counterparty credit risk.” [B]
This has helped to ensure that both Barclays and its customers have avoided exposure to highly risky positions. Had its backtesting failed, the bank would have been required to re-adjust its CCR RWA evaluation by increasing its estimates of CCR exposure. That increase would, in turn, require it to hold additional capital add-ons, a requirement that would be extremely costly given the large size of the bank’s counterparty RWA. Commercial confidentiality also prevents Barclays from releasing its backtesting methodology documentation or related quantitative backtesting results. However, the bank’s Head of Cross Product Modelling writes:
“I am happy to provide this letter to recognise Professor Yao’s contribution to our CCR backtesting methodology, which has helped Barclays to improve our Counterparty Credit Risk management.” [B].
Wider benefits of improved CCR management: the proper management of exposure to CCR based on the new method underpinned by [1] delivers several important benefits.
First, it improves the overall stability of Barclays in the sense that both its customers’ and the bank’s own interests are protected by limiting exposure to uncovered high-risk positions to a small probability (such as 0.05% or 0.01%).
Secondly, it contributes to the stability of the global financial system as a whole by mitigating the potential impact of the failure of one bank on others. The collapse of Lehman Brothers in 2008 underscored the need for better protections of this sort within a highly interconnected global financial system.
Thirdly, it increases the bank’s confidence in lending via a reduction of the cost of borrowing. This results from the fact that calculating the extreme PFE based on the new method helps the bank more accurately estimate an adequate capital buffer. In other words, the method allows banks to cover a given degree of risk with a smaller capital requirement. Given that Barclays holds CCR RWA worth tens of billions of US dollars, the direct saving from this new and scientifically calculated buffer is substantial. This substantial saving improves the bank’s business and operations and encourages more investment.
Yao’s innovative contribution in estimating extreme PFE quantiles is an indispensable part of the new backtesting method directly supporting these benefits. As such, the research has contributed indirectly to assuring greater financial system security at lower cost, with wider economic benefits.
5. Sources to corroborate the impact
[A] Basel III: A global regulatory framework for more resilient banks and banking systems, Basel Committee on Banking Supervision. Revised version June 2011. See particularly Part D, Section II.
[B] Supporting statement from the Managing Director and Head of Cross-Product Modelling at Barclays, 10 July 2019.
[C] Supporting statement from the Director and Head of the Risk Models Group (Risk Management) and the Director and Head of Counterparty Credit Risk (Quantitative Analytics) at Barclays, 9 July 2019.
- Submitting institution
- The London School of Economics and Political Science
- Unit of assessment
- 10 - Mathematical Sciences
- Summary impact type
- Economic
- Is this case study continued from a case study submitted in 2014?
- No
1. Summary of the impact
Research by Professor Chris Skinner on design and estimation methods for cross-classified sampling has led to reductions in sampling error for a UK Department of Health and Social Care survey of medicines pricing. The results of this survey are used to determine how much community pharmacies in England are reimbursed for medicines dispensed via NHS prescriptions. The reduction in sampling error resulting from Skinner’s work has helped to ensure that reimbursement price adjustments are smoother and based on more accurate evidence. In turn, this helps ensure both efficiency in NHS spending on prescription medicines and a more stable and predictable NHS funding stream for community pharmacies. Ultimately, the research has contributed to increasing NHS finance in areas supporting improved health and individual wellbeing.
2. Underpinning research
Since joining LSE in 2011, and until his death in March 2020, Professor Chris Skinner led a continuing programme of research into the statistical methodology of sample surveys and censuses. This started with a 2011-2013 project on “Enhancing the use of information on survey data quality”, supported by an ESRC Professorial Fellowship. Skinner’s research was stimulated particularly by interactions with organisations conducting surveys and censuses. These included the Office for National Statistics (ONS), for whom he led an independent review of the methodology underlying future options for the census [1], and the European Social Survey team at City University, with whom he collaborated to address methodological challenges arising in this survey on two further ESRC grants in 2013-2016.
The research underpinning impacts described here was stimulated by interaction with the UK Department of Health and Social Care (DHSC) about methods for monitoring NHS expenditure on medicines prescribed by general practitioners (GPs). This expenditure amounts to around GBP9 billion a year. Spending is governed by the Community Pharmacy Contractual Framework (CPCF), which was established in 2005 and renewed in 2019 for the period 2019-2024. Under the CPCF, pharmacies purchase medicines directly from the market; this determines the prices they pay for those medicines. Meanwhile, the DHSC sets the so-called “Drug Tariff” - the amount that pharmacies will be reimbursed for the cost of each medicine dispensed for an NHS prescription.
These arrangements allow pharmacies to retain a “medicines margin”, that is, the difference between the prices at which pharmacies purchase medicines and the amount they are reimbursed by the DHSC. The medicine margin is intended to incentivise pharmacies to purchase cost-effectively for the NHS by rewarding them for purchasing medicines at or below the reimbursement prices set in the Drug Tariff. The current CPCF sets an overall target of GBP800 million per annum for the medicines margin. In other words, community pharmacies in England should be able to make GBP800 million each year by purchasing medicines cost-effectively for the NHS. If the margin is much more than this, the NHS will be out of pocket. If it is much less, community pharmacies may not be being fairly reimbursed or properly incentivised to make cost-effective purchases. The DHSC monitors whether this margin is achieved through a Margin Survey of pharmacy invoices and adjusts the reimbursement as necessary. This involves increasing or decreasing the reimbursement prices of some medicines in the Drug Tariff in the following year to recoup any overspends or reimburse any underspends related to the margin from previous years.
The National Audit Office (NAO) reviewed the operation of the CPCF from its inception in 2005 to 2009 and found that the target for the medicines margin was exceeded by, on average, GBP277 million per annum (see Section 5, **[B]**). It concluded that “uncertainty surrounding the actual level of the margin … should, in our view, have made getting a robust assessment of actual levels of margin more of a priority”. It further recommended that the DHSC “be more timely in making adjustments to reimbursement prices … to manage the level of retained margin” and “continue to work with recognised experts in survey design and analysis to maintain and improve the invoice survey”.
Further to these recommendations, the DHSC invited Skinner to work with them on improvements to the methodology of the Margin Survey. To that end, they provided him with funding support for the periods 2013-2015 and 2018-2020. Skinner accepted their invitation because the Margin Survey presented novel statistical challenges for accuracy assessment and sampling design. These challenges related especially to its complex sampling scheme, in which a sample of medicines is crossed with a sample of pharmacies in the context of strong temporal effects associated with a volatile wholesale market for generic medicines.
The DHSC had already established basic design and estimation methods for the survey, working under the advice of Dr Pedro Silva (a former PhD student of Skinner, now in Brazil). However, they were concerned about the accuracy of estimates of the margin and sought further advice on how to improve this. Skinner subsequently developed a new approach to accuracy (standard error) estimation for such a “cross-classified” design (published in **[2]**). Unlike the earlier approach, this enabled the separate and interacting effects of medicine sampling and pharmacy sampling to be identified. In turn, this allowed the medicine and pharmacy sample sizes to be adjusted separately and in an efficient way to achieve the desired accuracy improvements. Skinner also extended his approach to capture the temporal aspect of the sampling, demonstrating the gain in accuracy which could be achieved by a shorter “rotation period” for the medicine sampling.
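The sketch below conveys the underlying idea in a deliberately simplified setting: a balanced, unstratified cross-classified sample, with a value observed for every sampled drug crossed with every sampled pharmacy, analysed with textbook two-way ANOVA variance components. It is not the estimator of [2], which handles the survey's stratified and unbalanced design, but it shows how the drug, pharmacy, and interaction contributions to the standard error can be separated, and hence why increasing the two sample sizes affects accuracy differently. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical balanced data: margin-related values y[i, j] for drug i in the
# drug sample (size m) crossed with pharmacy j in the pharmacy sample (size n).
m, n = 200, 40
drug_effect = rng.normal(0, 3, size=(m, 1))
pharm_effect = rng.normal(0, 1, size=(1, n))
y = 10 + drug_effect + pharm_effect + rng.normal(0, 2, size=(m, n))

# Two-way ANOVA mean squares (balanced case only; schematic, not the survey's design).
grand = y.mean()
msa = n * ((y.mean(axis=1) - grand) ** 2).sum() / (m - 1)      # between drugs
msb = m * ((y.mean(axis=0) - grand) ** 2).sum() / (n - 1)      # between pharmacies
mse = ((y - y.mean(axis=1, keepdims=True)
          - y.mean(axis=0, keepdims=True) + grand) ** 2).sum() / ((m - 1) * (n - 1))

# Method-of-moments variance components for drug, pharmacy, and interaction effects.
sigma2_ab = mse
sigma2_a = max((msa - mse) / n, 0.0)
sigma2_b = max((msb - mse) / m, 0.0)

# Approximate variance of the overall mean, ignoring finite-population corrections:
# the three terms show which dimension of the sampling dominates the error.
var_mean = sigma2_a / m + sigma2_b / n + sigma2_ab / (m * n)
print(f"drug component:        {sigma2_a / m:.4f}")
print(f"pharmacy component:    {sigma2_b / n:.4f}")
print(f"interaction component: {sigma2_ab / (m * n):.4f}")
print(f"estimated variance of the mean: {var_mean:.4f}  (SE = {var_mean ** 0.5:.3f})")
```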
A further challenge addressed in [2] was the influence of outlying observations on accuracy. Skinner developed standard-error-based rules for identifying “influential observations” and a more sophisticated framework for Winsorization of such observations to reduce their influence on mean and standard error estimators. The key outputs of the research relevant to the impacts outlined here are closed-form analytic expressions for point estimators and standard error estimators, which are approximately unbiased under the complex sampling schemes.
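As an aside on the Winsorization idea, the fragment below applies a generic upper Winsorization at an arbitrary quantile cutoff to synthetic data containing a few outlying values; the cutoff rule and the treatment of the resulting bias in [2] are more sophisticated than this simple capping.

```python
import numpy as np

def winsorize_upper(values, cutoff):
    """Cap values above `cutoff`, returning the Winsorized values."""
    return np.minimum(values, cutoff)

rng = np.random.default_rng(2)
# Hypothetical margin observations with a handful of outlying invoices.
y = np.concatenate([rng.normal(50, 10, size=995), [400, 450, 500, 520, 600]])

cutoff = np.quantile(y, 0.99)          # an illustrative cutoff choice only
y_w = winsorize_upper(y, cutoff)

for label, v in [("raw", y), ("Winsorized", y_w)]:
    se = v.std(ddof=1) / np.sqrt(len(v))
    print(f"{label:>10}: mean = {v.mean():6.2f}, SE of mean = {se:.3f}")
```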
Skinner started this research in April 2013. He was solely responsible for the key theoretical research, which was submitted in May 2015 and published later that year [2]. A paper setting out the relevance of this research to a wider class of business surveys was presented at the International Conference on Establishment Surveys in Geneva in June 2016. The research advanced earlier survey sampling work on cross-classified sampling, and attracted significant international as well as national attention. In France, for example, Juillard et al. (2017, J. Amer. Statist. Ass.) extended some of the ideas in [2] and applied them to a French longitudinal survey on childhood.
From April 2013 to February 2016, the research was primarily about statistical theory, informed by analyses of the Margin Survey data undertaken by the DHSC. In March 2016, Yajing Zhu began her involvement in the research, taking responsibility for managing Margin Survey data supplied by the DHSC to LSE, and programming statistical methods. This enabled a study of the numerical properties of the methods. Data from 2013/14, 2014/15, and 2015/16 were supplied initially; these were updated with data from 2016/17 and 2017/18 in October 2018.
Key researchers: Chris Skinner, Professor of Statistics, October 2011 to February 2020 (deceased); Yajing Zhu, PhD student at LSE, January 2015 to October 2018 (now Data Scientist, Roche).
3. References to the research
[1] Skinner, C. J., Hollis, J., and Murphy, M. (2013). Beyond 2011: Independent review of methodology. Report to Office for National Statistics, London. Available at: https://www.ons.gov.uk/census/censustransformationprogramme/beyond2011censustransformationprogramme/independentreviewofmethodology
[2]* Skinner, C. J. (2015). Cross-classified sampling: some estimation theory. Statistics and Probability Letters, 104, pp. 163-168. DOI: 10.1016/j.spl.2015.06.001.
*The asterisked output best indicates the quality of the underpinning research. Statistics and Probability Letters is a high-quality internationally refereed journal.
4. Details of the impact
The Pharmaceutical Services Negotiating Committee (PSNC) promotes and supports the interests of NHS community pharmacies in England. The results of the Margin Survey have major financial implications for such pharmacies. Accordingly, the PSNC takes a keen interest in its methodology and the DHSC seeks to ensure agreement with the PSNC regarding any changes to this. Skinner met with DHSC and PSNC members roughly twice a year to share and discuss reports of the LSE research. These inputs guided both the agenda for the LSE research and the implementation in the Margin Survey of methods developed in that research.
Impacts on the methodology of the Margin Survey
The main impacts of the research on the methodology of the Margin Survey have been: (a) more reliable estimators of accuracy (standard errors); and (b) improved accuracy. More specifically:
- New estimators of standard errors were introduced in 2016. These include methods for sub-annual estimates as well as for annual estimates, addressing the National Audit Office (NAO) recommendation about providing more timely results.
- Research on how to make estimation more robust to outlying observations (in line with the NAO recommendations) led to new standard error-based rules for identifying “influential observations”; these were implemented in 2019. New Winsorization methods were also proposed and are currently under consideration.
- Research into the dependence of the standard error on stratum sample sizes for both the drug and pharmacy samples supported a decision to increase the sample sizes for branded and generic drugs. This decision reflected research evidence showing that a greater effect could be achieved on reducing the standard error of the margin estimator by increasing the drug sample than by increasing the pharmacy sample.
- Research into the dependence of the standard error on rotation of the drug sample led to the rotation period being shortened from six to three months in 2018/19. This led to an estimated reduction of about 15% in the standard error of the margin estimator.
Skinner’s role in this was the provision of expert - and, crucially, impartial - advice to both the DHSC and the PSNC as they negotiated an agreement on the optimal margin size. The changes outlined here had resourcing implications for both parties, who used the research insights provided by Skinner to support their productive collaboration on improvements to the Margin Survey. As a Public Health Analyst at the DHSC explains:
“The Margin Survey allows the DHSC to adjust reimbursements to pharmacies as necessary to make sure that they remain properly incentivised and fairly reimbursed, while protecting the NHS from overspending on the reimbursement of particular medicines…We sought to improve the accuracy of the model wherever possible and [Skinner] was able to provide impartial advice to both the DHSC and the PSNC on ways to achieve this. Changes to the sample size and its frequency, supported by [Skinner’s] research, helped improve the survey’s accuracy and confidence in its results.” [A]
Subsequent impacts on adjustments of reimbursement prices in the Drug Tariff
The changes in the Margin Survey methodology have, in turn, supported adjustments of the reimbursement prices in the Drug Tariff. These adjustments have been achieved because the new methods allow: (a) reduced deviations between actual and target margins; and (b) reduced variability in adjustments over time.
The scale of adjustments has significantly reduced from the NAO’s estimated deviations of hundreds of millions of pounds per annum in 2005-2009 (for the scale of this previous deviation, see **[B]**). Deviations between the actual and target margins reflect unpredictable market changes, as well as Margin Survey estimation error, and it is difficult to quantify the relative contributions of these two factors. However, based on consideration of the impacts on Margin Survey standard errors, the improvement in accuracy relative to the target margin, together with the reduction in the variation of adjustments over time, is of the order of tens of millions of GBP per annum.
Benefits to the NHS
In recent years the NHS has faced considerable cost pressures, including in relation to expenditure on medicines. By reducing errors in the Margin Survey estimation, Skinner’s research has helped ensure that the NHS is better able to recoup overspends on prescription drugs, supporting significant savings by reducing these sorts of losses. It is able to make this contribution because the adjustment of reimbursement prices for medicines is made only when evidence can be provided that the target margin has been missed. If that cannot be proven, the Drug Tariff cannot be adjusted and it is impossible for any overspend to be recouped. Inaccuracies in the estimated margin can therefore lead to a situation in which the NHS is unable to recoup overspends, simply because it cannot prove that the target margin has been missed.
For illustration, if the true annual margin is GBP860 million, the estimated margin is GBP850 million, and the standard error is GBP30 million, an approximate 95% confidence interval is GBP790-910 million. It may then be concluded that there is no statistical evidence that the target of GBP800 million has not been met and so the Drug Tariff does not need changing to recoup overspend. However, with a standard error of GBP20 million and a confidence interval of GBP810-890 million, it may be decided that there is an overspend of around GBP50 million (= GBP850 million - GBP800 million) to be recouped.
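The arithmetic of this illustration can be reproduced in a few lines (the figures are those of the hypothetical example above, not DHSC data):

```python
from scipy.stats import norm

z = norm.ppf(0.975)          # ~1.96 for a 95% confidence interval
target = 800                 # target margin, GBP million
estimate = 850               # estimated margin, GBP million

for se in (30, 20):          # standard errors, GBP million
    lo, hi = estimate - z * se, estimate + z * se
    recoup = estimate - target if lo > target else 0
    print(f"SE = {se}: 95% CI = ({lo:.0f}, {hi:.0f}), "
          f"recoupable overspend = GBP{recoup}m")
```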
These sorts of savings are desirable, not primarily for their own sake, but because greater efficiency in the use of NHS resources increases the amount of healthcare that can be financed from a given total NHS budget.
Benefits to community pharmacies and primary care networks
Community pharmacies have also faced severe cash flow challenges, with smaller independent pharmacies being especially vulnerable to “income variation and unpredictability” (see, for example, **[D]**). In October 2018, the PSNC noted that pharmacy reimbursement prices:
“will reduce by GBP10 million per month from November 2018 for the next five months (until March 2019). This is to repay excess margin earned by pharmacies in previous years, in particular 2015/16 for which the results of the Margin Survey show that there was a significant over-delivery of margin.” [E]
Against this backdrop, the PSNC expressed to HM Government its “deep concerns about the financial pressures facing community pharmacy contractors and the fact that they would increasingly be unable to reinvest, given pressures from rising staff costs and business rates” [E]. The impact of such cost pressures can be a reduction of services or, ultimately, closure - there were 134 net closures of “bricks and mortar” pharmacies between November 2016 and April 2018 [F]. In the broader context of plans to cut their NHS funding by GBP170 million, the then-Health Minister Alistair Burt estimated in January 2016 that between 1,000 and 3,000 pharmacies could face closure (BBC, 27 January 2016).
The research has helped ensure that community pharmacies receive a more stable and predictable funding stream from dispensing medicines. This results from the use of research findings to ensure that reimbursement price adjustments are smoother and made with accurate evidence from the Margin Survey [A]. In July 2019, the PSNC, NHS England and NHS Improvement, and the DHSC agreed a five-year deal for community pharmacies, guaranteeing funding levels until 2023/24. The deal secures pharmacy funding and sets out a clear vision for the expansion of clinical service delivery over the next five years, in line with the NHS Long Term Plan. Part of the agreement includes a commitment to:
“working on a range of reforms to reimbursement arrangements to deliver smoother cash flow and fairer distribution of medicines margin and better value for money for the NHS.” ([G], p. 5)
The LSE research contributes to this commitment.
Benefits to patients
In May 2018, the CEO of the PSNC expressed concern about the effects of cost pressures on community pharmacies. In addition to closures, he noted that cuts meant that:
“[r]ather than investing in developments that could improve patient care and allow them to offer more services, contractors are having to consider reducing staff, opening hours and unpaid services, such as home delivery.” [F]
These sorts of changes all have potentially negative implications for patients, especially the vulnerable and those in areas of high deprivation where pharmacies have traditionally been able to help reduce health inequalities [F]. By helping to ensure that pharmacies have a more stable and predictable stream of income from prescription medicines, the ultimate impact of the research is therefore a contribution to increased NHS finance in areas supporting improved health and individual wellbeing.
5. Sources to corroborate the impact
[A] Supporting statement from Public Health Analyst, Department of Health and Social Care, 9 March 2021.
[B] National Audit Office (March 2010), “The Community Pharmacy Contractual Framework and the retained medicine margin”.
[C] House of Commons Committee of Public Accounts, “Price increases for generic medications”, Sixty-Second Report of Session 2017-19, 12 September 2018. See pp. 5-6.
[D] Pharmaceutical Services Negotiating Committee, “PSNC Briefing 018/14: The settlement negotiations and the negotiating process - background information for contractors”, September 2014.
[E] Pharmaceutical Services Negotiating Committee, “CPCF funding arrangements 2018/19”.
[F] “Government figures show drop in pharmacy numbers since funding cuts”, The Pharmaceutical Journal, 31 May 2018.
[G] Department of Health and Social Care, “The Community Pharmacy Contractual Framework for 2019/20 to 2023/24: supporting delivery for the NHS Long Term Plan”, 22 July 2019.
- Submitting institution
- The London School of Economics and Political Science
- Unit of assessment
- 10 - Mathematical Sciences
- Summary impact type
- Societal
- Is this case study continued from a case study submitted in 2014?
- No
1. Summary of the impact
High-quality political polling is a significant element in the good conduct of democratic politics. In the UK, public confidence in polling was badly damaged by the failure of the polls to correctly predict the outcome of the 2015 General Election. Professor Jouni Kuha was the only statistician appointed by the British Polling Council (BPC) and the Market Research Society (MRS) to the panel of the Inquiry set up to investigate this poor polling performance. The panel’s findings led to changes to BPC and MRS rules and to the methodological procedures used by commercial polling companies. Its research also influenced the conclusions and recommendations of the House of Lords Select Committee on Political Polling and Digital Media. As a result, it has influenced regulation of UK political polling, polling methodology, media reporting of polls, and the reputation and commercial prospects of the polling industry. By providing a more robust tool to use in generating poll results, it has contributed to the provision to parties and voters of more accurate information, and thereby to better democratic governance.
2. Underpinning research
The research underpinning impacts described here originates from the work of the panel of the BPC/MRS polling Inquiry. The bulk of this research was carried out in 2015-16. It was published in the report of the Inquiry [1] and in an associated academic article in the Journal of the Royal Statistical Society [2]. The latter includes a concise version of the Inquiry findings, but also provides more detailed technical information about the methodology of election polls. There are three main elements to the research published in [1] and [2]:
(i) A technical explanation and analysis of the methodology of UK election polls
The organising principle here was to describe the methodology explicitly in terms of general theory of estimation for non-probability samples, something which had not previously been done for election polls. This formulation makes it easier to identify the different elements of the methodology, to examine sources of error in them, and to suggest improvements to them.
(ii) Empirical analysis of the potential causes of the failure of the polls in 2015, organised by the methodological elements identified in (i)
This work concluded that the polling error in 2015 was caused mostly by unrepresentativeness of the poll samples, which was not sufficiently mitigated by the weighting procedures employed by the polling companies. In other words, the samples systematically over-represented Labour supporters and under-represented Conservative supporters, even conditional on the weighting variables. The research was able to rule out a range of other potential causes of the error, including turnout weighting, postal voting, overseas voting, and late swing.
(iii) A set of recommendations to the polling industry, drawing on (i) and (ii)
The Inquiry produced 12 recommendations to the polling industry. Recommendations 1-5 addressed the methodology of how election polls are collected and analysed. They included calls for BPC members to take measures to obtain more representative samples conditional on their weighting variables; to review existing methods for determining turnout probability weights; and to investigate new quota and weighting variables. Recommendations 7-10 looked at registration and transparent reporting of the polls. Recommendations 11-12 concerned calculating and reporting uncertainty in poll estimates. Recommendation 6 was to the Economic and Social Research Council (ESRC), and dealt with additional survey data collection. The methodological recommendations derived directly from the general formulation of the polling methodology and the empirical findings presented in [1].
The research in [2] also proposed a new bootstrap resampling method of estimating uncertainties and confidence intervals for estimated vote shares from non-probability samples in election polls, and for changes and differences in vote shares. The panel suggested that this would improve existing methods of calculating these uncertainties, which were based on an unrealistic approximation that polls behave as if they were simple random samples. This, it was shown, can give a misleading idea of the true sampling uncertainty in the polls.
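The following sketch shows the general flavour of such a bootstrap on an invented poll: resample respondents with replacement, recompute weighted vote shares each time, and take percentile intervals for the shares and for the lead of one party over another. It is a simplified illustration rather than the specific resampling scheme proposed in [2], and the sample size, weights, and vote distribution are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical poll of 2,000 respondents: stated vote (0, 1, 2 for three
# parties) and calibration weights (stand-ins; real polls weight to many variables).
n = 2000
vote = rng.choice([0, 1, 2], size=n, p=[0.37, 0.34, 0.29])
weights = rng.gamma(shape=5.0, scale=0.2, size=n)   # mean ~1, some spread

def weighted_shares(vote, w):
    totals = np.array([w[vote == k].sum() for k in range(3)])
    return totals / totals.sum()

# Bootstrap: resample respondents with replacement, recompute weighted shares.
B = 2000
boot = np.empty((B, 3))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = weighted_shares(vote[idx], weights[idx])

point = weighted_shares(vote, weights)
lead = boot[:, 0] - boot[:, 1]                      # party 0 minus party 1 lead
ci_share = np.percentile(boot, [2.5, 97.5], axis=0)
ci_lead = np.percentile(lead, [2.5, 97.5])

print("point estimates of shares:", np.round(point, 3))
print("95% intervals per party:", np.round(ci_share.T, 3))
print("95% interval for the lead:", np.round(ci_lead, 3))
```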
The polling Inquiry panel was chaired by Professor Patrick Sturgis (University of Southampton until 2019, now LSE). Its eight other members were drawn from both the polling industry and academia; Kuha was its only specialist statistician. The panel’s work was divided into streams exploring possible problems with election polling (that is, the different possibilities under (ii) above). Kuha worked principally on the related topics of representativeness of the samples and calibration weighting, which emerged as the primary explanations for the polling failure in 2015. He also took the lead in developing and presenting the formal statistical elements of the research, described under (i) above.
Kuha was invited to join the panel on the basis of prior research establishing him as an expert in statistics for the social sciences, in survey methodology, and in collaborative research with social scientists in different areas using survey data (see, for example, **[3]**- **[6]**). Kuha also had specific prior knowledge of election polling. This was gained particularly from work for the BBC/ITV/Sky exit poll, where he has been the lead statistician of the analysis and prediction team for the UK General Elections in 2010, 2015, 2017, and 2019.
3. References to the research
[1] Sturgis, P., Baker, N., Callegaro, M., Fisher, S., Green, J., Jennings, W., Kuha, J., Lauderdale, B., and Smith, P. (2016). Report of the Inquiry into the 2015 British General Election Opinion Polls. Market Research Society and British Polling Council. Available at: http://eprints.ncrm.ac.uk/3789/. Downloaded more than 28,000 times as of August 2020.
[2] Sturgis, P., Kuha, J., Baker, N., Callegaro, M., Fisher, S., Green, J., Jennings, W., Lauderdale, B., and Smith, P. (2018). An assessment of the causes of the errors in the 2015 UK General Election opinion polls. Journal of the Royal Statistical Society, Series A, 181(3), pp. 757-781. DOI: 10.1111/rssa.12329.
[3] Curtice, J., Fisher, S. D., and Kuha, J. (2011). Confounding the commentators: How the 2010 exit poll got it (more or less) right. Journal of Elections, Public Opinion & Parties, 21(2), pp. 211-235. DOI: 10.1080/17457289.2011.562612.
[4] Kuha, J. and Jackson, J. (2014). The item count method for sensitive survey questions: Modelling criminal behaviour. Journal of the Royal Statistical Society, Series C (Applied Statistics), 63(2), pp. 321-341. DOI: 10.1111/rssc.12018.
[5] Sturgis, P., Brunton-Smith, I., Jackson, J., and Kuha, J. (2014). Ethnic diversity, segregation, and the social cohesion of neighbourhoods in London. Ethnic and Racial Studies, 37(8), pp. 1286-1309. DOI: 10.1080/01419870.2013.831932.
[6] Bukodi, E., Goldthorpe, J. H., Waller, L., and Kuha, J. (2015). The mobility problem in Britain: New findings from the analysis of birth cohort data. British Journal of Sociology, 66(1), pp. 93-117. DOI: 10.1111/1468-4446.12096.
4. Details of the impact
Political polling plays an important role in shaping party strategies, media presentation of election campaigns, and public perceptions of the political landscape. The information it provides therefore has the potential to influence democratic processes. Because election polls are highly visible, their quality and accuracy can also have a major effect on the reputation of, and public confidence in, surveys and public opinion research more widely. The work outlined above has delivered direct impacts in several areas relevant to this.
Changing the rules and regulations of political polling in the UK: members of the British Polling Council (BPC) include all the major polling companies that carry out regular election polls in the UK. The work of the Inquiry panel and the resulting report [1] led directly to changes in BPC rules about the ways in which political polling is conducted and reported. In June 2016, the Council confirmed its agreement to immediately implement rule changes corresponding to the reporting and transparency Recommendations 7-9 of the report [1] (but postponed a response to Recommendation 10, on pre-registration of polls) [A]. In May 2018, it also responded to Recommendation 11 by introducing a new requirement for its members to publish a statement about the level of uncertainty in poll estimates of parties’ vote shares [B]. Responding to the methodological Recommendations (1-5), the BPC further advised that it was “for individual member companies to decide how best to take these forward”. Some of the ways in which member companies did this are discussed below (see “Impacts on industry practice”).
The work has also had impacts on the regulation of polling in the UK via its use in public policy debate on the subject. Impacts here derive primarily from its influence on the conclusions and recommendations of the House of Lords (HoL) Select Committee on Political Polling and Digital Media, appointed on 29 June 2017 to consider the effects of political polling and digital media on politics. The Committee’s final report was published on 17 April 2018 [D]; the government’s response to it was received on 15 June 2018 [E]. Sturgis served as Specialist Advisor to the Committee and five other members of the Inquiry panel - including Kuha - gave evidence or provided briefings to it. The Committee’s report [D] refers extensively to [1], which helped set the parameters for its own inquiry. Thus, for example, the Committee explains that, since it had been “comprehensively covered” in [1], “We have not...attempted to replicate this work by delving in detail into the methodological causes of polling errors” [D, para. 13, p. 11]. The Chair of the Committee, Lord Lipsey, reported in a 2019 interview with the University of Southampton that: “the BPC inquiry was one of the key cornerstones of our work constructing the Lords inquiry and in informing our view of the present polling outlook” [G, p. 30].
The Select Committee’s recommendations on the regulation of the polling industry included a substantially expanded oversight and advisory role for the BPC. It did not, however, propose government regulation of political polling or banning polls close to elections, both of which it had considered. The spirit of these recommendations was echoed in the government response: “Polling standards should remain self-regulated by the industry [… who…] already have a strong incentive to update and improve their techniques, especially if flaws are uncovered” [E, p. 1].
Impacts on industry practice: a second major area of impact - resulting largely from these effects on the rules and recommendations governing polling in the UK - has been changes in methodology and procedures within the polling industry. These have had direct effects on decisions and operations within, and outcomes for, individual polling companies. According to an associate director at Opinium, the principal value of the BPC Inquiry after the 2015 election was that it: “…was able to point quite definitively to a system-wide problem, larger than just an issue which might have only been affecting our Consumer Panel or other things which would have been unique to one company” [G, p. 37]. This more rigorous, system-wide understanding underpinned by [1] helped public opinion research companies to identify and address problems in their methods. In a BPC report published in 2017 [C], nine BPC-member companies summarised changes to their polling procedures in response to [1]. These particularly included changes to sampling, weighting, and turnout adjustment procedures.
The influence of the panel’s work on industry practice is also apparent in evidence provided to the HoL Select Committee. Joint evidence submitted by eight of the companies explains:
“The Joint Inquiry led by Professor Patrick Sturgis into political polling…after the 2015 General Election identified that the main cause of polling error was unrepresentative samples. As such, one of the most pressing issues for the industry to tackle has been to improve the quality of their sample, an undertaking pursued across the membership of the British Polling Council.” [F, PPD0014, p. 154]
Similarly, YouGov made explicit reference to its own work to address the key findings of the Inquiry, including a substantial investment in efforts to improve recruitment to its panel of respondents [F, PPD0016, p. 549]. Its Director of Political and Social Research confirmed that “the [changes] that came out directly from the report findings were…the introduction of quota and weighting by political interest and the introduction of education weights” [G, p. 12].
The polling Inquiry has also had impact beyond the UK. A comparable inquiry was recently conducted along similar lines in Australia to examine the performance of the polls before the federal election there in 2019 [H]. According to the Chair of that inquiry panel: “Not only was the Sturgis et al. review heavily cited, it was also tremendously influential in helping to frame the Australian inquiry and guide some [of] our analysis” [I].
Benefits to the polling industry
Improving polling accuracy: the polling industry’s efforts appeared to bear fruit in the polls for the General Election in December 2019, for which the companies further refined their methodologies within the framework outlined in the Inquiry report [J]. The polls correctly predicted the general result of the election, and the BPC described their predicted vote shares as “more accurate … than in any contest since 2005” [K].
Commercial benefits: the changes in industry practice have had important knock-on effects on the success of individual companies - and, thereby, on the industry’s contribution to the UK economy. Market research and public opinion research are big business in the UK. According to MRS, the UK’s research market is “second only to the United States”, employing up to 73,000 people and generating GBP4.8 billion in annual gross value added. Election polling represents only a small part of this business, but one which is important because of its unique visibility and accountability. This makes it “a ‘shop front’ for polling organisations - an activity aimed at increasing their public profiles and advertising their accuracy” [D, Appendix 5, p. 99]. The reputation of that “shop front” was badly damaged in 2015. As well as improving the reliability of polling methods, evidence submitted to the Select Committee acknowledged the impacts of [1] on transparency in the industry. Providing evidence for the BPC, its President Professor Sir John Curtice credits [1] with influencing the fact that: “…if any company changes in any way the way in which it has collected or estimated its voting intention data during an election or referendum campaign, it has to make that public” [J, QQ 139–147, p. 89]. These changes have already begun to improve the polling industry’s reputation among the media and other stakeholders. According to Opinium, [1] itself was “extremely valuable in that it showed that we were being transparent and were committed to resolving the various issues”. Public engagement work by the Inquiry panel was also “extremely helpful” in “rehabilitating a bit of the industry's image after 2015” [G, pp. 37-38]. By providing actionable (and now actioned) recommendations to improve polling, the Inquiry has contributed to restoring confidence in this highly visible facet of the UK’s “business of evidence” industry.
Benefits to users of polling data: beyond the commercial benefits for pollsters and their industry, changes in polling practices have also delivered important benefits to those who use polling data. Key users and consumers of political polls include press and online media covering elections, political parties running in those elections, and voters themselves.
It is generally agreed that pre-election polling helps shape media coverage and therefore the “narrative” of elections; in turn, this may be seen to affect public perceptions of an election and decision-making by political parties. This view was echoed in evidence given to the HoL Select Committee by the Director of Editorial Policy and Standards at the BBC: “Our concern about the 2015 and 2017 general elections and the Scottish and EU referendums was the capacity of the polls to influence the journalistic narrative of those election campaigns” [D, para. 82, p. 27]. Certainly, it is well known that political parties take an interest in the results of more prominent newspaper polls, as well as using private polling to inform decision-making, and the HoL Select Committee heard evidence that this can influence the strategic approach and decision-making of those parties [D, paras. 83-91, pp. 27-29].
The report [1] has improved understanding among journalists about the reporting of polls. A representative of the BPC explained that it provided “a very useful tool for the media to help understand exactly what had gone wrong [in 2015]…It made it easier for them to in turn explain to their readers [and] viewers what had happened” [G, p. 26]. The report also underpinned the development of new media recommendations about the use of polling data, now published by both BPC and MRS [L]. According to the Director of Deltapoll:
“…following the Inquiry the British Polling Council hardened its recommendation to journalists on how they should use and treat polling. I think lots of people now don’t look at individual polls to the extent that they did, but view them as part of a series and a trend and look at the general patterns rather than individuals, which obviously implies outliers probably get less recognition than they used to.” [G, p. 35]
Finally, political polls are one of the most prominent forms of information available to the voting public. One cannot make any very strong claims about whether and how polling may affect voting, as the empirical evidence on this question is limited. Whatever such effects there may be, however, it is clearly preferable that decision-making is based on accurate information. As the HoL Select Committee concluded:
“…voting intention polls play a hugely significant role in shaping the narrative around political events such as elections and referendums. Given the impact that they can have on political discourse, they will inevitably influence public behaviour and opinions, even if only indirectly. It is therefore vital that work continues in order to try to improve polling accuracy and that this is done as transparently as possible.” [D, para. 92, p. 29].
The work of the polling Inquiry has contributed to the achievement of this goal. By helping to improve the quality of polling, it has improved the accuracy of the information available to those who use voting data, helping them to develop better-informed campaign strategies and voting choices, and to present and receive fairer and more accurate election coverage.
5. Sources to corroborate the impact
[A] British Polling Council (BPC), “BPC Inquiry Report”. Press release, 31 March 2016. Summarises responses to the recommendations of the Sturgis Inquiry.
[B] BPC, “British Polling Council Introduces New Rule on Uncertainty Attached to Polls”. Press release, 1 May 2018.
[C] BPC, “How Have The Polls Changed Since 2015?”. Press release, 26 May 2017. Reports changes made to individual polling companies’ methodologies in response to the Inquiry.
[D] House of Lords Select Committee on Political Polling and Digital Media, “The Politics of Polling”. Final report, published 17 April 2018.
[E] Department for Digital, Culture, Media & Sport, “Government response to House of Lords Select Committee on Political Polling and Digital Media report on ‘The politics of polling’”, June 2018.
[F] Select Committee on Political Polling and Digital Media. Oral and Written Evidence. Recommendations made by the Inquiry panel are also endorsed and supported by the Royal Statistical Society in its written evidence at PPD0022 (p. 447).
[G] S. Jarvis (2019). Understanding and Improving Political Polling. Impact Case Study Report. Report produced by the University of Southampton. Available on request.
[H] D. Pennay et al. (2020). Inquiry into the Performance of the Opinion Polls at the 2019 Australian Federal Election.
[I] Email from Chair of the Inquiry into the Performance of the Opinion Polls at the 2019 Australian Federal Election. Received 2 October 2020. Available on request.
[J] BPC, “Principal Changes in the Conduct and Reporting of Polls in the 2019 General Election”. Press release, 29 November 2019.
[K] BPC, “The Performance of the Polls in the 2019 General Election”. Press release, 13 December 2019.
[L] BPC, “Guidance for Journalists”; and MRS, “Interpreting polls and election data - guidance for media and journalists”. Accessed 23 November 2020.
- Submitting institution
- The London School of Economics and Political Science
- Unit of assessment
- 10 - Mathematical Sciences
- Summary impact type
- Economic
- Is this case study continued from a case study submitted in 2014?
- No
1. Summary of the impact
LSE research on systemic risk and financial contagion in financial markets has informed the work of policymakers at the Bank of England (BoE), helping to protect and enhance financial stability in the UK and beyond. It has particularly informed the inclusion by the BoE of a model for financial contagion in its annual concurrent stress test in 2016 and 2017; that model builds on a framework developed at the LSE. In both these years the BoE stress test covered seven major UK banks and building societies: Barclays, HSBC, Lloyds Banking Group, Nationwide, The Royal Bank of Scotland Group, Santander UK, and Standard Chartered. These banks and building societies (hereafter referred to collectively as “banks”) account for around 80% of the lending to the UK real economy by banks regulated by the Prudential Regulation Authority. Ensuring that they are able to withstand a potential financial shock is therefore important to maintaining the wider financial stability of the country.
2. Underpinning research
The research underpinning impacts described here was carried out within a wider body of work conducted by Dr Luitgard Veraart on the use of network models to assess systemic risk and financial stability. In a 2013 paper [1], Veraart and her co-author Professor L. C. G. Rogers (University of Cambridge) modelled the interbank market as a directed graph of interbank obligations. If one or more banks default on their payment obligations, losses spread throughout the network and can cause other banks in the system to default. The model described in [1] allows users to compute how much each bank in the network is able to pay at the end of the default cascade. These payments are referred to as clearing payments.
The model set out in [1] builds on the modelling paradigm of Eisenberg and Noe (2001) but includes the important additional feature of allowing for default costs. This immediately introduces novel and realistic effects, since the presence of default costs significantly changes the default cascade. Without default costs, the network spreads losses, but cannot amplify them. In the presence of default costs, however, the initial losses causing an institution to default can be substantially amplified while the default cascade runs its course.
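To make the clearing mechanism concrete, the sketch below iterates the payment map of a Rogers-Veraart-style network in Python. It is an illustrative sketch only, not the code used in [1] or by any bank: the function name, the three-bank liability matrix, and the default cost parameters alpha (applied to a defaulting bank's external assets) and beta (applied to its interbank receipts) are all hypothetical choices made for this example.

```python
import numpy as np

def clearing_vector(liabilities, external_assets, alpha=0.9, beta=0.9,
                    tol=1e-10, max_iter=10_000):
    """Fixed-point iteration for a Rogers-Veraart-style clearing vector with
    default costs: a defaulting bank realises only a fraction alpha of its
    external assets and beta of its interbank receipts (illustrative only)."""
    L = np.asarray(liabilities, dtype=float)      # L[i, j]: amount bank i owes bank j
    e = np.asarray(external_assets, dtype=float)  # external (non-interbank) assets
    total_owed = L.sum(axis=1)                    # each bank's nominal obligations
    # Relative liability matrix: share of bank i's payments that goes to bank j.
    shares = np.divide(L, total_owed[:, None],
                       out=np.zeros_like(L), where=total_owed[:, None] > 0)

    p = total_owed.copy()                         # start from full payment
    for _ in range(max_iter):
        received = shares.T @ p                   # interbank payments received
        solvent = e + received >= total_owed
        # Solvent banks pay in full; defaulters pay out their cost-reduced assets.
        p_next = np.where(solvent, total_owed,
                          np.minimum(total_owed, alpha * e + beta * received))
        if np.max(np.abs(p_next - p)) < tol:
            return p_next
        p = p_next
    return p

# Hypothetical three-bank network (all numbers illustrative only).
L = np.array([[0.0, 10.0, 5.0],
              [4.0,  0.0, 6.0],
              [3.0,  2.0, 0.0]])
e = np.array([2.0, 8.0, 12.0])
print(clearing_vector(L, e, alpha=0.8, beta=0.8))  # prints approximately [ 7.2 10.   5. ]
```

With alpha = beta = 1 the iteration reduces to an Eisenberg-Noe-style clearing problem, in which losses are redistributed but not amplified; with alpha, beta < 1, part of a defaulting bank's value is destroyed, so its creditors absorb a larger shortfall than the initial loss alone would imply.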
Because these amplification effects are a key concern for policymakers, it is important to ensure that they are captured in the models used in today’s stress tests. In a paper on default contagion in financial networks [2], Veraart generalised the framework outlined in [1] to allow financial contagion to be triggered not only by the default of an institution but also by mark-to-market effects. These effects were a vital driver of losses in the 2007-2009 Global Financial Crisis. The paper [2] further showed how this generalised framework could, in principle, be applied even when only partial information is available about the underlying network of exposures. This can be achieved using the Bayesian approach to systemic risk assessment that Veraart had previously developed in [3] and [4].
More recently, Veraart has undertaken a new programme of collaborative research with the Bank of England on the development and analysis of models of system-wide stress. This research has analysed liquidity stress in the repurchase agreement (repo) market, looking at the wider financial system beyond the banking sector. A particular focus of the analysis was liquidity stress during the early phase of the Covid-19 pandemic.
3. References to the research
[1] Rogers, L. C. G. and Veraart, L. A. M. (2013). Failure and rescue in an interbank network. Management Science, 59(4), pp. 882-898. DOI: 10.1287/mnsc.1120.1569. Work on this output began in 2007, but the paper was very significantly refined and extended between Veraart joining LSE in September 2010 and its acceptance for publication in February 2012.
[2] Veraart, L. A. M. (2020). Distress and default contagion in financial networks. Mathematical Finance, 30(3), pp. 705-737. DOI: 10.1111/mafi.12247.
[3] Gandy, A. and Veraart, L. A. M. (2017). A Bayesian methodology for systemic risk assessment in financial networks. Management Science, 63(12), pp. 4428-4446. DOI: 10.1287/mnsc.2016.2546.
[4] Gandy, A. and Veraart, L. A. M. (2019). Adjustable network reconstruction with applications to CDS exposures. Journal of Multivariate Analysis, 172, pp. 193-209. DOI: 10.1016/j.jmva.2018.08.011.
Based partly on the research described above, Veraart was the co-winner of the University of Cambridge Adams Prize 2019, awarded each year by the Faculty of Mathematics and St John’s College to UK-based researchers under the age of 40 conducting first-class international research in the mathematical sciences. The Adams Prize is one of the oldest and most prestigious prizes awarded by the University of Cambridge.
4. Details of the impact
Veraart has worked regularly with the Bank of England (BoE) since 2015. From October to December 2016, she served as a BoE George Fellow, in which capacity she was based full-time in the Stress Testing Strategy Division of the Financial Stability Strategy and Risk directorate [A]. The primary impact of the research outlined above is its use by the BoE to conduct stress testing in 2016 and 2017. Its use in this context means that the research has contributed to the fulfilment of one of the BoE’s key objectives: to protect and enhance the stability of the UK financial system.
Helping meet statutory objectives
The main aim of the BoE’s stress testing framework is to help the Financial Policy Committee (FPC) and the Prudential Regulation Authority (PRA) to meet their statutory objectives. The FPC’s primary objective is to contribute to the BoE’s financial stability objective to protect and enhance the stability of the UK financial system. It is also a general objective of the PRA to promote the safety and soundness of the banks it regulates. Stress testing, which is used to analyse the resilience of an object or system under extreme (adverse) conditions, supports both of these objectives. Stress tests are used both to measure risk and to manage it by setting prudential policy. Their use in finance has significantly increased since the 2007-2009 Global Financial Crisis, which exposed fault lines in existing risk management practice. Following recommendations made by the Basel Committee on Banking Supervision (the primary global standard-setter for the prudential regulation of banks), regulators around the world have developed and implemented new stress testing frameworks to both measure and manage risk in financial markets. The Basel Committee notes that: “Stress testing is now a critical element of risk management for banks and a core tool for banking supervisors and macroprudential authorities” [B].
New tools for UK stress testing
In the United Kingdom, the BoE has conducted an annual stress test since 2014. The main purpose of this is to “provide a quantitative, forward-looking assessment of the capital adequacy of the UK banking system and individual institutions within it” [C, p. 7]. The first part of a stress test is to design a stress scenario. Each year since 2016, the BoE has considered a scenario whose severity reflects policymakers’ assessment of the current risk environment. This so-called “annual cyclical scenario” (ACS) is used to test the resilience of the UK banking system to factors such as deep simultaneous recessions in the UK and global economies and financial market stresses. The ACS is counter-cyclical, meaning that it is more severe when a large amount of risk has built up in markets and less severe once those risks have crystallised or receded. Additional scenarios are considered biennially.
The second part of the stress testing exercise is concerned with evaluating the impact of the stress scenario on banks’ balance sheets and, in particular, their capital positions. It is important that suitable models and methods are used to conduct this analysis, because the results of the stress testing exercise are used to set regulatory capital buffers and to determine whether banks need to improve their capital positions [D]. The LSE research described here has had impacts on this aspect of the BoE stress test.
In 2015, the BoE identified the modelling of system-wide dynamics and feedback mechanisms as a key priority for its stress testing framework. As a result, it particularly sought to develop tools facilitating the exploration of system-wide dynamics. The 2007-2009 crisis had demonstrated the vital importance of spillovers and feedback channels - both between financial institutions and between the financial sector and the real economy - to quantifying the likely impacts of financial stresses. The need to understand these channels made analyses of them, such as the treatment of the interbank lending channel in [1], an important element of stress tests.
Testing solvency contagion via interbank lending
In 2016, the BoE stress test included, for the first time, testing of solvency contagion via interbank lending [E, p. 34]. The BoE’s solvency contagion model examines how deteriorating capital positions lead to the revaluation of interbank debt claims, which can in turn further weaken banks’ capital positions. The model used to conduct the solvency contagion test introduced in 2016 builds on the modelling framework set out in [1]; this is particularly apparent in its explicit inclusion of exogenous bankruptcy costs. A subsequent BoE Staff Working Paper describing the development of the new solvency contagion model cites both [1] and [3] and acknowledges Veraart’s input [F]. The Executive Director of Financial Stability Strategy and Risk has since confirmed the importance of the underpinning research to the development of the BoE’s stress testing:
“[Veraart’s] research has informed the Bank’s modelling and analysis, in particular on incorporating feedback and amplification mechanisms in the Bank of England annual cyclical scenario (ACS) stress test. Solvency contagion was the first amplification mechanism included in the Bank of England’s stress test in 2016. The methodology […] builds on research by Dr Veraart, in particular her work on financial networks (Rogers and Veraart (2013) **[1]**).” [A]
Understanding and incorporating feedback loops and amplification effects is paramount. In a 2017 speech, the BoE Executive Director for Financial Stability Strategy and Risk stated that it was these feedback loops that “helped to turn around USD300 billion of subprime mortgage-related losses into well over” USD2.5 trillion of potential write-downs in the global banking sector within one year [G, p. 6].
The new model, which was also used as part of the 2017 stress test [H], helped to address two of the BoE’s key priorities: “developing a genuinely macroprudential approach to identifying risks in the banking sector; and enhancing the Bank’s modelling capabilities as part of the concurrent stress tests of the banking system” [E].
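As a schematic illustration of the revaluation loop described above, the Python sketch below repeatedly marks down each bank's interbank claims using a valuation factor that depends on the debtor's current equity, so that a deterioration at one bank feeds through to its creditors' capital positions. The valuation function, variable names, and numbers are hypothetical assumptions introduced for this example; the Bank's actual solvency contagion model is the one documented in the Staff Working Paper cited at [F].

```python
import numpy as np

def solvency_contagion(claims, external_assets, external_liabilities,
                       valuation, tol=1e-10, max_iter=1_000):
    """Schematic solvency-contagion loop: interbank claims are repeatedly
    revalued with a factor that depends on each debtor's current equity
    (illustrative only; not the Bank of England's model)."""
    A = np.asarray(claims, dtype=float)            # A[i, j]: face value of i's claim on j
    ea = np.asarray(external_assets, dtype=float)
    el = np.asarray(external_liabilities, dtype=float)
    ib_liabilities = A.sum(axis=0)                 # what each bank owes other banks

    equity = ea + A.sum(axis=1) - el - ib_liabilities  # claims initially at face value
    for _ in range(max_iter):
        factors = valuation(equity)                # per-bank valuation factor in [0, 1]
        new_equity = ea + A @ factors - el - ib_liabilities
        if np.max(np.abs(new_equity - equity)) < tol:
            return new_equity
        equity = new_equity
    return equity

def linear_writedown(equity, recovery=0.4, buffer=5.0):
    """Hypothetical valuation: claims are held at face value while the debtor's
    equity is non-negative and written down linearly towards a recovery floor."""
    return np.clip(1.0 + (1.0 - recovery) * equity / buffer, recovery, 1.0)

# Hypothetical three-bank example (all numbers illustrative only).
A = np.array([[0.0, 6.0, 4.0],
              [2.0, 0.0, 5.0],
              [3.0, 1.0, 0.0]])
ea = np.array([1.0, 4.0, 10.0])
el = np.array([9.0, 3.0, 2.0])
print(solvency_contagion(A, ea, el, linear_writedown))  # prints approximately [-3.    0.28  1.92]
```

In this toy example, claims on the weakest bank are written down and its creditors' equity falls even though no bank has actually defaulted, illustrating how a deterioration in capital positions short of default can still transmit losses through the network.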
Supporting stability during the Covid-19 pandemic
Veraart’s work on financial stability has continued to deliver further benefits for the BoE beyond these direct impacts on the 2016 and 2017 stress tests. In 2020, for example, she worked with economists at the BoE to explore liquidity stress in the repo market, looking at the wider financial system beyond the banking sector. As the BoE Executive Director of Financial Stability Strategy and Risk explains, this more recent work “has informed policymakers in the context of analysing the stress in financial markets observed during the Covid-19 pandemic” [A]. Veraart’s contribution to the BoE’s work on financial stability is further realised through her membership of the Academic Advisory Group to the One Bank Research Steering Committee, which oversees the direction of the BoE’s research.
Adams Prize
The development of new tools supporting financial stability was recognised in the aforementioned award to Veraart of the 2019 University of Cambridge Adams Prize. Professor Mihalis Dafermos, Chair of the Adams Prize Adjudicators, noted that:
“Dr Veraart has developed new tools and concepts relevant for the representation and analysis of financial stability and systemic risk in banking networks. Her work has had considerable visibility and impact, both within academia and outside” [I].
5. Sources to corroborate the impact
[A] Supporting statement from Executive Director, Financial Stability Strategy and Risk, Bank of England (and also a member of the Financial Policy Committee), 18 December 2020.
[B] Basel Committee on Banking Supervision (October 2018), “Stress testing principles”. This replaces Basel Committee on Banking Supervision (May 2009), “Principles for sound stress testing practices and supervision”.
[C] Bank of England (October 2013), “A framework for stress testing the UK banking system”. Discussion paper.
[D] Bank of England (October 2015), “The Bank of England’s approach to stress testing the UK banking system”.
[E] Bank of England (November 2016), “Stress testing the UK banking system: 2016 results”. See, especially, p. 34, Box 3, which refers to the model described in [F].
[F] Bardoscia, M., Barucca, P., Brinley Codd, A., and Hill, J. (2017), “The decline of solvency contagion risk”, Bank of England Staff Working Paper No. 662. See pp. 3, 6, and 9 for reference to [1] and p. 4 for reference to [3]. Veraart’s input is also referenced in the Acknowledgments (p. 17). The source code used to run the simulations, referenced at p. 18 (available at https://github.com/marcobardoscia/Neva), includes a file ibeval.py which contains a function rogers_veraart (definition starts on line 353). This implements in Python the model proposed in [1].
[G] “How to: MACROPRU. 5 principles for macroprudential policy”, speech given by Executive Director for Financial Stability Strategy and Risk, Bank of England, at the London School of Economics Financial Regulation Seminar, 13 February 2017.
[H] Bank of England (November 2017), “Stress testing the UK banking system: 2017 results”. For use of the solvency contagion model beyond 2016, see p. 40.
[I] “Adams Prize winners 2018-19 announced”, St John’s College, University of Cambridge, 5 March 2019.