Foreword
On the basis of his thorough review of the Bank of England’s (Bank’s) macroeconomic forecasts and their role in policy preparation and communication (Bernanke (2024)), former Federal Reserve Chair Ben Bernanke offered several recommendations to improve the forecasting process underlying the decisions of the Monetary Policy Committee (MPC).
One such recommendation included the suggestion that ‘the staff should be charged with highlighting significant forecast errors and their sources, particularly errors that are not due to unanticipated shocks to the standard conditioning variables. Models and model components that may have contributed to forecast misses should be regularly evaluated and discussed’.footnote [1]
This Forecast Evaluation Report is one part of the Bank of England’s response to Bernanke’s recommendation. Building on previous work by Bank staff,footnote [2] the Report undertakes a thorough statistical evaluation of the accuracy, unbiasedness and efficiency of the forecasts published in the MPC’s regular Monetary Policy Reports. It also presents a more detailed narrative of forecasting experience and performance over recent years.
Bank staff plan to develop and extend these analyses in future years, with subsequent published reports to follow. To support that evolution and provide an opportunity for external experts to offer their insights on Bank staff’s underlying technical analysis, the Bank is also publishing a Macro Technical Paper describing the tools that have been used to undertake the statistical analysis (Abiry et al (2026)) and the code used in the implementation of the approachfootnote [3] in parallel with the release of this Report. The Bank welcomes feedback on both the Forecast Evaluation Report itself and the underlying technical material and code.
To ensure this evaluation matches best practice, Bank staff have embodied three key elements into their analysis. First, Bank staff have recognised the importance of adopting a ‘real-time’ approach, employing only the data that were available at the time the forecast was made when benchmarking against alternatives (and highlighting the role of data revisions). Second, Bank staff have recognised the role played by the MPC’s conditioning assumptions in driving forecast profiles and thus influencing forecast errors. (In the forecasts published in the MPC’s Monetary Policy Report, one such conditioning assumption is the future path of Bank Rate, which is derived from market-implied expectations.) And third, Bank staff have recognised that new shocks will inevitably affect the economy in the period between making any forecast and the outturn against which it is judged.
By recognising these three potential sources of forecast errors, Bank staff’s evaluation allows more focus to be placed on those errors that arise from potential shortcomings or gaps in the Bank’s models and analysis (rather than errors owing to unanticipated shocks, which cannot be entirely avoided). In real time, such model or analytical shortcomings are impossible to remedy entirely, but there is much to learn from evaluation of those errors with the benefit of hindsight. As is apparent from the quotation from his review above, this approach is very much in the spirit of Bernanke’s recommendation.
Even as the role of the Bank’s forecasts in preparing monetary policy decisions evolves (and the forecasts are supplemented by other complementary analyses) in line with Bernanke’s other recommendations (Bailey (2025)), an evaluation of how the forecasts published in the MPC’s Monetary Policy Reports have performed as predictors of economic outcomes can help identify strengths and weaknesses in the Bank’s understanding of the UK economy (and allow Bank staff to refine their approach to forecasting in the future). Embracing the feedback implicit in forecast errors lies at the heart of the Bank’s efforts to ensure monetary policy making in the UK is conducted within ‘a culture of continuous learning’ that leads over time to improvements in the MPC’s and Bank staff’s understanding of the UK economy and the transmission of monetary policy within it (Lombardelli (2024)).
From a policymaking perspective, the overriding question is not solely, or even mainly, whether a forecast performs well on the statistical criteria of accuracy, unbiasedness or efficiency. Rather, a forecast that supports the monetary policy process needs to help the MPC form a view of the economic outlook, to prompt a rich discussion among MPC members that leads to appropriate monetary policy decisions, and to help communicate its decisions consistently with those views both internally within the Bank and to external stakeholders. The MPC’s job is to set good policy to meet the inflation target, and that is how it should be judged. Forecasts – along with other analytical inputs – are means to that end.
Learning from forecast errors not only helps us make better forecasts, it also helps us to develop a better understanding of the economy. It is that better understanding that is crucial to enhancing the analytical inputs that underpin the MPC’s monetary policy decisions. Publication of this Report represents one of many important steps in that learning process.
Huw Pill
Chief Economist and Executive Director for Monetary Analysis and Research
1: Executive summary
The Bank of England’s Monetary Policy Committee (MPC) sets monetary policy to achieve the inflation target of 2%.
The MPC’s policy decisions are informed by analysis prepared by Bank of England (Bank) staff. An important component of that analysis is forecasts of the economy. Forecasts will remain an important element of this input to the MPC’s policy decisions, even as their role evolves in the light of the Bernanke Review of forecasting for monetary policy making and communication (Bernanke (2024) and Dhami et al (2025)). Following that review, Bank staff are widening the range of inputs presented to the MPC, in part to include more regular assessments of risks, uncertainty, and alternative policy paths.
The nature of forecasts presented in the Monetary Policy Report has also changed. These forecasts are projections, based on a staff proposal, that a majority of the MPC agrees are reasonable baselines, rather than representations of the MPC’s ‘best collective judgement’ as was the case previously (Bailey (2025)).
Even as the role of the Bank’s forecasts evolves and they are supplemented by other complementary analyses, an evaluation of how these forecasts have performed as predictors of economic outcomes can help identify strengths and weaknesses in the Bank’s understanding of the UK economy. This Report finds that the Bank’s forecasts have been at least as accurate as the average of external forecasters or a range of alternative model-based approaches over the past decade.
Where forecasts have underperformed, analytical investment can improve the Bank’s understanding of specific sectors or macroeconomic relationships. Evaluating forecast performance is therefore one way to prompt the ‘continuous learning’ about the structure of the economy that the Bernanke Review recommends (Lombardelli (2024)).
This Forecast Evaluation Report draws lessons from analysis of both historical and recent performance.
Among several recommendations to improve the Bank’s forecasting and policymaking processes, Bernanke suggested that Bank staff should identify and explain forecast errors.footnote [4] This Report addresses that suggestion, drawing lessons from both statistical analysis of longer-term forecast performance and a more detailed analysis of recent experience.footnote [5] It evaluates the Bank’s forecasts of four key policy-relevant variables: CPI inflation, GDP growth, wage growth and the unemployment rate. CPI inflation is the MPC’s target variable. GDP, unemployment and wages relate to key channels in the domestic inflation generation process according to economic theory. The interactions between these variables can also shed light on structural features of the UK economy. While these four indicators are important, they do not represent an exhaustive list of the variables analysed when formulating monetary policy. Future Forecast Evaluation Reports may explore different features of the economy and include coverage of different variables as a result.
This Report assesses the historical performance of the Bank’s forecasts using a range of statistical methods.
This Report evaluates the statistical properties of the Bank’s forecasts along three standard dimensions: accuracy, unbiasedness and efficiency. Accuracy metrics consider the magnitude of forecast errors, ie whether the forecast has typically been close to the realised outcome. Unbiasedness examines whether errors have been evenly distributed in both directions, or instead have tended to lean in a particular direction, ie whether forecasts have tended to be systematically too high or too low. Efficiency investigates whether forecasts have made effective use of information, ie whether the forecast could have been improved upon by better exploiting information available at the time it was made.
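The three dimensions can be made concrete with a small numerical sketch. The code below is purely illustrative (simulated data, not the Bank’s forecasts or toolkit): it computes a root mean squared error for accuracy, a simple t-statistic on the mean error for unbiasedness, and a Mincer–Zarnowitz regression for efficiency, under which an efficient forecast has intercept zero and slope one.

```python
import numpy as np

# Illustrative sketch with simulated data (not Bank code or Bank data).
rng = np.random.default_rng(0)
outturn = 2.0 + rng.normal(0, 1, 40)            # realised values
forecast = outturn + rng.normal(0.2, 0.5, 40)   # forecasts with a small upward bias
errors = outturn - forecast

# Accuracy: root mean squared error (typical magnitude of errors).
rmse = np.sqrt(np.mean(errors**2))

# Unbiasedness: mean error should be close to zero; a simple
# t-statistic on the mean error gives a rough test.
mean_error = errors.mean()
t_stat = mean_error / (errors.std(ddof=1) / np.sqrt(len(errors)))

# Efficiency (Mincer-Zarnowitz): regress outturns on forecasts;
# an efficient forecast has intercept 0 and slope 1.
X = np.column_stack([np.ones_like(forecast), forecast])
alpha, beta = np.linalg.lstsq(X, outturn, rcond=None)[0]

print(f"RMSE={rmse:.2f}, mean error={mean_error:.2f} (t={t_stat:.1f}), "
      f"MZ intercept={alpha:.2f}, slope={beta:.2f}")
```

In this simulation the forecast is constructed with a deliberate upward bias, so the mean error is negative and the Mincer–Zarnowitz slope falls below one, illustrating how the tests flag bias and inefficiency.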
While a good forecast in statistical terms is one that performs well on these three dimensions, from a policymaking perspective the crucial issues are whether the forecast helps the MPC form a view of the economic outlook, prompts a rich discussion among the MPC that supports appropriate monetary policy decisions, and helps communicate its decisions consistently with those views.
Forecast accuracy is therefore not the only or most relevant metric for assessing the value of forecasts in the policymaking process. This is particularly the case for the forecasts presented in the Monetary Policy Report, which are conditioned upon certain assumptions for key variables (including the future path of Bank Rate) that may not represent the MPC’s own views of how those variables will evolve. Nevertheless, valuable lessons can be drawn from this statistical analysis.
The Bank’s forecasts have typically performed at least as well as a range of alternative forecasts over time, and across the selection of variables reviewed here.
Bank forecasts are compared with a range of alternatives to permit an assessment of their relative performance. The alternatives considered include forecasts constructed using an autoregressive (AR) model, a Bayesian vector autoregression (BVAR) model, a mechanical application of the Bank’s dynamic stochastic general equilibrium (DSGE) model known as COMPASS (Albuquerque et al (2025)), and a survey of external forecasters (SEF) (Box A). By making a relative comparison, this exercise controls for the impact of changes in uncertainty or volatility on forecast errors, since all approaches are subject to these in the same way.
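To illustrate the logic of a relative comparison, the sketch below estimates a simple AR(1) benchmark of the kind used in such exercises, re-estimating it each period on only the data available at that point (a pseudo real-time design), and reports its one-step RMSE relative to a naive random-walk forecast. All data here are simulated; this is not the Bank’s benchmark suite.

```python
import numpy as np

# Hedged sketch: an AR(1) benchmark evaluated in pseudo real time.
# The series is simulated, not an actual Bank data set.
rng = np.random.default_rng(1)
y = np.empty(60)
y[0] = 2.0
for t in range(1, 60):                          # simulated inflation-like AR(1) series
    y[t] = 0.5 + 0.7 * y[t - 1] + rng.normal(0, 0.5)

errs_ar, errs_naive = [], []
for t in range(30, 60):
    # Estimate the AR(1) using only data known before period t...
    X = np.column_stack([np.ones(t - 1), y[: t - 1]])
    c, phi = np.linalg.lstsq(X, y[1:t], rcond=None)[0]
    # ...then record the one-step-ahead errors for each benchmark.
    errs_ar.append(y[t] - (c + phi * y[t - 1]))  # AR(1) forecast error
    errs_naive.append(y[t] - y[t - 1])           # random-walk forecast error

rel_rmse = float(np.sqrt(np.mean(np.square(errs_ar)))
                 / np.sqrt(np.mean(np.square(errs_naive))))
print(f"AR(1) RMSE relative to random walk: {rel_rmse:.2f}")
```

A relative RMSE below one indicates the AR(1) benchmark outperforms the random walk on this sample; the same ratio logic underpins comparisons of the Bank’s forecasts against the benchmarks listed above.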
This comparison shows that none of the alternative forecasts considered would have outperformed the Bank’s forecasts over the past decade. While the accuracy of the Bank’s forecasts has declined since 2020, that is also true of the range of alternative benchmarks. This suggests that much of the deterioration in forecast performance since Covid can be attributed to heightened general economic volatility rather than specific deficiencies in the Bank’s approach or framework. This is also consistent with the Bernanke Review, which found the deterioration in forecast accuracy since the pandemic had been similar for the Bank as for other major central banks (Bernanke (2024)).
The statistical analysis presented here identifies areas for potential improvement, particularly for labour market variables.
The relatively good performance of the Bank’s forecasts on average over a long historical window does not mean that there is no room for improvement or nothing to learn. Looking at forecast performance on a variable-by-variable basis, the Bank’s forecasting of labour market variables – specifically wage growth and unemployment – is an area that would benefit from further work. Forecasts for wage growth have exhibited signs of bias either side of the Covid pandemic, while the unemployment rate has been overpredicted since 2015. Statistical tests also point to inefficiencies in the Bank’s GDP, wage and unemployment forecasts. Related Bank research has also flagged potential for improvement in the economic relationships embedded in the Bank’s forecast machinery that may help account for this forecast inefficiency (Kanngiesser and Willems (2024)).
Unforeseeable events explain some of the largest forecast errors since Covid, but the persistence of inflation is harder to reconcile with standard forecast treatments.
This Report also explores some of the drivers of the Bank’s forecast errors over the past five years in greater detail, setting out seven key findings. As noted by Bernanke (2024), a series of large surprises – or ‘shocks’ – made forecasting more challenging than usual during that period, leading to larger forecast errors for central banks around the world. Model-based simulations – or ‘counterfactuals’ – of what the Bank’s forecasts would have been had it known about some of those shocks in advance can be used to estimate their contribution to forecast errors.
The set of shocks that hit during this period can be characterised with a model-based decomposition of the economic drivers that account for the deviations of inflation from the 2% target (Section 5). Adapting a model developed by Bernanke and Blanchard (2025) to the UK (Haskel, Martin and Brandt (2025)) reveals a prominent role for those shocks. These included the aftermath of the Covid pandemic, as bottlenecks in supply chains contributed to elevated global export prices; Russia’s invasion of Ukraine which drove up energy and food prices globally; tightness in the UK labour market which has driven up wages and inflation; and weakness in productivity which has pushed up on firms’ costs.
Surprises in energy prices and other global factors can broadly account for the peak inflation error. In late 2022, CPI inflation peaked at 11.1%, more than 8 percentage points above forecasts made in mid-2021, before Russia invaded Ukraine. Counterfactual analysis suggests that subsequent energy price rises can account for around half of this error (key finding 1). Other global factors explain most of the remaining error (key finding 2). Partly offsetting this, and in response to higher inflation, Bank Rate rose further than assumed in the Bank’s forecasts (which use financial market expectations for the future path of Bank Rate). And there is some evidence to suggest that tighter monetary policy affected the economy more quickly than in the past (key finding 3).
While the impacts of these external shocks can broadly explain the peak errors, they cannot fully account for the subsequent persistence of inflation. After 2022, the Bank’s medium-term inflation and wage growth forecasts proved repeatedly too low. Some of this strength in inflation likely reflected ‘second-round effects’ from higher inflation feeding back to stronger wage growth. It is also possible that there have been longer-lasting, or ‘structural’, changes in price and wage setting in recent years (key finding 4).
GDP growth has been stronger than forecasts expected on average since 2021, which (along with slightly lower estimates of potential supply that we do not evaluate directly in this Report) has contributed to there being less spare capacity in the latest estimates than initially projected (key findings 5 and 6). Forecast performance for GDP growth – and for labour market variables – appears more favourable when assessed using initial data releases than when using later data vintages, but the latter may be a better reflection of the ‘truth’ as more information is incorporated into official statistics over time (key finding 7).
These findings will continue to inform improvements to the Bank’s forecast models and processes, as well as other inputs to policymaking.
A number of changes have already been made that address the results presented in this Report. For example, the treatment of energy within the Bank’s forecast machinery has been improved based on the experience of recent years (Albuquerque et al (2025)). The speed of monetary transmission within the Bank’s policy analysis toolkit has been adjusted to match the latest Bank staff analysis (Alati et al (2025)). And internal processes for monitoring supply developments have become more frequent, acknowledging the increasingly supply-driven character of economic fluctuations.
Future model development will seek to improve the Bank’s modelling and understanding of key economic mechanisms, including the labour market, wage-price interactions and inflation expectations, which should help to explain recent inflation persistence.
The remainder of this Report starts by giving an outline of how the Bank’s forecasts are produced and the role they play at the Bank (Section 2). It then gives an overview of the forecast evaluation approach undertaken (Section 3). It follows with a summary of some of the longer-term statistical properties of the Bank’s forecasts (Section 4) before focusing in on some of the key findings from more recent forecast errors (Section 5). It finishes with some forward-looking takeaways (Section 6).
2: Forecasting at the Bank of England
2.1: How the Bank’s forecasts are produced
The Bank’s forecasts are constructed using a broad set of models, information and judgement.
The Bank produces forecasts for a range of macroeconomic variables that are relevant for monetary policy makers. These forecasts are produced four times a year and published alongside the quarterly Monetary Policy Report.
The Bank’s forecasts draw on steers from a range of models, data, surveys, analysis and intelligence from the Bank’s Agency network. These sources are also often supplemented by Bank staff and MPC judgements to adjust for model limitations or information that models may not account for directly. The precise approach and models underpinning the production of the Bank’s forecasts are often updated and have changed over time. Table 2.A gives an overview of how these forecasts are constructed currently.
The forecasts are conditional projections.
The Bank’s forecasts are conditional in nature, as they take several input paths as given. Conditioning paths for Bank Rate, world interest rates, global commodity prices, the exchange rate,footnote [6] and wholesale energy prices are based on market expectations. Non-wholesale energy costs are assumed to evolve in line with Office of Gas and Electricity Markets (Ofgem) projections. Fiscal policy is assumed to follow announced government policy, as reflected in the Office for Budget Responsibility’s (OBR’s) projections.
An implication of this approach is that the resulting projections may not always correspond to the MPC’s single most likely expectation of the outcome, for example if the conditioning paths for one or more of these variables do not represent the MPC’s own views of how they will evolve. Temporary inconsistencies between conditioning paths may also arise, which can lead to forecast errors. For instance, if fiscal policy were widely anticipated to loosen relative to previous policy announcements reflected in the OBR’s projections, financial market participants might expect a higher Bank Rate path. As the Bank’s forecasts condition on these financial market expectations, that might show up as a drag on the forecast for GDP growth.
The near-term outlook is informed by the latest data and statistical models.
The first two quartersfootnote [7] of the forecast are informed by statistical models rather than the Bank’s structural forecasting framework. These models are purely data driven, extrapolating recent time-series dynamics and incorporating high-frequency information, and tend to deliver greater accuracy than structural models for short-term forecasting (Giannone et al (2016)). The first two quarters are treated as ‘constrained’ inputs to the medium-term forecast. For inflation, Bank staff’s preferred short-term forecasting approach relies on a collection of unobserved component regressions, modelling the time series properties of detailed CPI components (Esady and Mate (forthcoming)). For GDP and labour market variables, a staggered-combination mixed-frequency (SC-MIDAS) model combining information from survey and official data is used (Moreira (2025)). A range of other statistical models are also used regularly as complements or cross-checks to these.
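The idea of combining survey and official data for near-term forecasting can be illustrated with a simple ‘bridge equation’. The sketch below is a stylised stand-in, not the Bank’s SC-MIDAS or unobserved-component models: it projects quarterly GDP growth on the quarterly average of a monthly survey balance, then nowcasts the current quarter from the latest survey readings. All series and coefficients are simulated for illustration.

```python
import numpy as np

# Illustrative bridge-equation sketch (not the Bank's actual models):
# regress quarterly GDP growth on the quarterly average of a monthly
# survey balance, then use the latest survey average to nowcast.
rng = np.random.default_rng(2)
survey_m = rng.normal(50, 5, 120)                # 10 years of monthly survey data
survey_q = survey_m.reshape(-1, 3).mean(axis=1)  # quarterly averages (40 quarters)
gdp_q = 0.4 + 0.05 * (survey_q - 50) + rng.normal(0, 0.2, 40)

# Estimate the bridge equation on history (all but the latest quarter)...
X = np.column_stack([np.ones(39), survey_q[:39]])
a, b = np.linalg.lstsq(X, gdp_q[:39], rcond=None)[0]

# ...and nowcast the current quarter from its survey average.
nowcast = a + b * survey_q[39]
print(f"GDP growth nowcast: {nowcast:.2f}% (outturn {gdp_q[39]:.2f}%)")
```

The appeal of such purely data-driven approaches at short horizons is that high-frequency indicators are released well before official quarterly data, which is why they tend to outperform structural models in the first two quarters.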
The Bank’s medium-term forecasts are constructed using a structural economic model as the central organising framework…
The remainder of the Bank’s three-year forecasts, also referred to as the medium-term forecasts, are anchored in a structural model. Unlike the data-driven statistical models used to pin down the near-term outlook, these models try to capture the behaviour and interactions among households, firms and other economic actors.
Standard economic theory posits that there are some structural features of the economy that help determine the broad interactions between the variables evaluated in this Report.
- Fluctuations in GDP growth can reflect supply- and/or demand-type ‘shocks’ that cause the level of activity to deviate from its underlying ‘potential’.
- Unemployment responds to cyclical changes in GDP via an ‘Okun’ relationship, as well as structural changes to the ‘natural’ (or non-inflationary) rate of unemployment.
- Wages respond to demand conditions via a wage ‘Phillips curve’, as well as a role for behavioural relationships between workers and firms, and structural factors.
- Inflation is affected by demand conditions via a price ‘Phillips curve’, as well as cost-push supply shocks and behavioural dynamics, with a key role for the labour market.
Monetary policy makers need to take account of these relationships to achieve the inflation target, although economic models differ in their treatment of these relationships and all are a simplification of reality. Regular evaluation of forecasts for these variables can help shed light on these relationships and their changing nature.
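These relationships can be written in stylised reduced form. The equations below are an illustrative sketch in generic textbook notation, not the Bank’s actual model equations (COMPASS embeds richer, microfounded versions of these mechanisms):

```latex
% Stylised reduced forms (illustrative notation only)
\begin{align}
  u_t - u_t^{*} &= -\beta\,(y_t - y_t^{*}) + \varepsilon^{u}_{t}
    && \text{(Okun relationship)} \\
  \Delta w_t &= \mathbb{E}_t \pi_{t+1} + g_t - \gamma\,(u_t - u_t^{*}) + \varepsilon^{w}_{t}
    && \text{(wage Phillips curve)} \\
  \pi_t &= \mathbb{E}_t \pi_{t+1} + \kappa\,(y_t - y_t^{*}) + \lambda\, s_t + \varepsilon^{\pi}_{t}
    && \text{(price Phillips curve)}
\end{align}
```

Here $y_t - y_t^{*}$ is the output gap, $u_t^{*}$ the natural rate of unemployment, $g_t$ trend productivity growth, $s_t$ a cost-push (supply) term, and the $\varepsilon$ terms are shocks. Errors in forecasting any one variable can therefore propagate to the others through these channels.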
While the approach to modelling these interactions has evolved over time, the current central organising framework for the Bank’s forecasts is COMPASS, a medium-sized, open economy New Keynesian DSGE model providing a coherent framework for understanding macroeconomic relationships and policy effects (Burgess et al (2013)). COMPASS has been updated over time with the latest iteration including an improved modelling of the role of energy (Albuquerque et al (2025)).footnote [8]
…and incorporate forecasts from a range of other models that better capture certain features of the economy, alongside Bank staff and MPC judgement.
As no model can perfectly capture all relevant features of the economy, COMPASS is also augmented by a range of more specialised models. For example, COMPASS is supplemented by the ‘Post Transformation’ model. This takes outputs from COMPASS, along with other data inputs, and uses simple accounting and statistical relationships to produce forecasts for a wider set of variables, such as the unemployment rate.
A semi-structural model, known as the ‘sectoral model’, is used to capture the transmission of monetary and financial conditions to different sectors of the economy (Cloyne et al (2015)). Wage growth forecasts are informed by a suite of models linking them to their key economic drivers (Bank of England (2024)).
Forecasts for key supply-side variables are also produced using a combination of COMPASS and a suite of more targeted models. Given the UK is an open economy, Bank staff forecasts of international developments are another important input into the Bank’s UK forecast.
Finally, a range of cross-check models are also used regularly to help inform judgements and in some cases also interpret changes in the Bank’s forecasts (for example, Brignone and Piffer (2025)).
Table 2.A: Overview of how the Bank’s forecasts are constructed
|  | Back data | Near-term forecast | Medium-term forecast |
| --- | --- | --- | --- |
| **Horizon** (a) | Data to Q-2 | Q-1 and Q0 | Q1 to Q12 |
| **Data** | All quarterly series back to 1987 (some subject to revision). | Draws on monthly official published data (Q-1) and unofficial survey data (Q-1 and Q0). | Conditioning paths treated as ‘data’ for certain variables. |
| **Models** | Assessment of (unobservable) supply-side variables, informed by COMPASS and other targeted models. Output gap filter models inform historic degree of spare capacity. | Statistical models using high-frequency data include: Unobserved Component models (for inflation) and SC-MIDAS models (for GDP growth and labour market variables). Output gap filter models inform current degree of spare capacity. | Structural DSGE model (COMPASS), supplemented with suite of additional models (including Post Transformation model) and sector-specific modelling approaches. For Q1, CPI inflation forecast based on near-term inflation forecast. |
| **Judgement** | Bank staff or MPC judgement used to complement forecasts by drawing on alternative models or accounting for factors the models don’t capture (applies across all horizons). | | |
Footnotes
- (a) Where Qt represents quarter t and t=0 is the quarter in which the forecast is published.
2.2: The role of forecasts in the MPC’s policy processes
The forecasts are projections that a majority of the MPC agrees are reasonable baselines.
The Monetary Policy Report forecasts are a central projection, based on a staff proposal, that a majority of the MPC agrees are ‘reasonable baselines’. Previously, these forecasts represented the MPC’s ‘best collective judgement’ (Bailey (2025)). This status reflects recent changes to the MPC’s approach to monetary policy making under uncertainty (Dhami et al (2025)).
The central projections in the MPC’s reasonable baseline include profiles for CPI inflation, GDP growth, the unemployment rate, and the output gap (a measure of the level of actual output relative to potential). Indicative projections for a range of other variables (including wage growth) are produced by Bank staff to be broadly consistent with these core central projections.footnote [9]
As part of the Bank’s response to the Bernanke Review, forecasts are being increasingly supplemented by other inputs to policymaking.
The role of the forecasts in the MPC’s policymaking process has changed over time, most recently in response to the recommendations in the Bernanke Review. In the past, the forecast had been more central to the policymaking process, serving as the primary vehicle to help the MPC organise and consolidate its understanding of the economic outlook, as well as playing a central role in policy communications. Reflecting the need for greater flexibility in an uncertain environment, the MPC now emphasises that the forecast is one of several regular inputs to policymaking and communications, alongside a richer assessment of risks, uncertainty and alternative policy paths (Figure 1 in Dhami et al (2025)).
2.3: The importance of forecast evaluation
Forecast evaluation can lead to better-informed monetary policy decisions.
Given the relevant role the forecast still plays for monetary policy makers, regularly reviewing the differences between realised and projected outcomes – or forecast errors – is valuable (Lombardelli (2024)). By helping to identify the need for improvements to the Bank’s models, processes and judgements, as well as its understanding of the economy, regular forecast evaluation can lead to better informed monetary policy decisions. This is also important to support transparency around the policymaking process.
Forecast errors can arise from a range of sources and some are unavoidable, reflecting unforeseeable events.
Forecast evaluation can help to identify whether forecast errors have originated from unforeseeable circumstances, model shortcomings, or errors of judgement. This is important because not all errors imply problems with the forecast process. A significant proportion of errors can instead reflect unforeseeable events that hit after forecasts were made. An example from 2022 is when Russia’s invasion of Ukraine drove a significant rise in wholesale gas prices, leading to significantly higher inflation than forecast previously.
Systematic evaluation may prompt improvements in modelling and judgements…
Other forecast errors can stem from model mis-specification, for example using incorrect assumptions for a key parameter or economic mechanism. By flagging systematic errors in certain variables or relationships, forecast evaluation can help target upgrades to the Bank’s forecasting infrastructure. This may include investing in the main models feeding into the Bank’s forecasts or developing alternative models. Where model-based improvements are not immediately available, forecast evaluation can inform judgemental adjustments instead.
…or identify areas where the Bank’s understanding could be improved through the wider range of inputs to policy, including scenarios and assessment of risks.
Evaluation of the Bank’s forecasts can also improve understanding of how past economic shocks have propagated through the economy and to what extent they might continue to affect the economic outlook. Moreover, it can help to identify new trends in the economy such as changes in households’ or businesses’ behaviour. It can also help to flag emerging areas of uncertainty, which can in turn inform where scenarios or further exploration of risks could be particularly useful to support monetary policy decisions.
Various changes across time provide important context for the evaluation of the Bank’s forecasts.
A range of changes to the forecast process described throughout Section 2 mean that the forecasts evaluated in this Report are from a period when the Bank’s forecasts were both created and used differently. Some of the lessons from past forecast errors have also been incorporated in the Bank’s forecasting approach, as discussed in Sections 5 and 6 of this Report. While this is unlikely to impact the Report’s broad findings, it may mean that some of its results are no longer fully representative of the Bank’s forecasting capability today.
3: Overview of forecast evaluation approach
This Report evaluates the Bank’s forecasts of four key macroeconomic variables.
The Report evaluates the Bank’s ‘modal’footnote [10] point forecasts for year-on-year (or four-quarter) CPI inflation, GDP growth and wage growth,footnote [11] as well as the level of the unemployment rate.
These variables were chosen as they are of particular importance for monetary policy. The MPC’s target is for CPI inflation of 2%. GDP growth and the unemployment rate can help inform the assessment of whether spare capacity and associated inflationary pressures are building or fading. Wage growth has been monitored particularly closely in recent years, given the role of wages as a key driver of domestic inflationary pressures.
Bank staff have developed a new toolkit to support regular forecast evaluation.
To help embed systematic forecast evaluation, as well as support the production of this and future Forecast Evaluation Reports, Bank staff have developed a new forecast evaluation toolkit, detailed in an accompanying Macro Technical Paper (Abiry et al (2026)). The codebase and the underlying data are also being made available on the Bank of England’s public GitHub repository as an accessible package in the Python programming language.
The toolkit draws on best practice in data science to handle a large volume of data and forecast vintages, enabling evaluation of the Bank’s forecasts based on real-time information. This includes the ability to benchmark the Bank’s forecasts against a range of model-based forecasts produced on the basis of that same real-time information.footnote [12] The toolkit implements the full range of statistical evaluation techniques used throughout Section 4 and can be extended with other evaluation approaches, variables or benchmark model comparisons.
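The kind of real-time evaluation the toolkit enables can be illustrated with a minimal sketch. The data structures and figures below are hypothetical, chosen for illustration only; they do not reflect the published toolkit’s actual API or data (see Abiry et al (2026) and the Bank’s GitHub repository for those). The sketch stores each forecast vintage’s projections by horizon, matches them to outturns for the quarter being forecast, and summarises errors at each horizon.

```python
from statistics import mean

# Hypothetical mini-dataset: each Monetary Policy Report 'vintage' carries
# forecasts at horizons 0..2 quarters ahead; outturns are keyed by the
# quarter being forecast. Illustrative numbers only.
forecasts = {
    "2023Q1": {0: 10.0, 1: 8.0, 2: 6.0},
    "2023Q2": {0: 8.5, 1: 6.5, 2: 4.5},
}
outturns = {"2023Q1": 10.2, "2023Q2": 8.4, "2023Q3": 6.7, "2023Q4": 4.2}

def shift(quarter, h):
    """Move a 'YYYYQn' label forward by h quarters."""
    year, q = int(quarter[:4]), int(quarter[5])
    q0 = q - 1 + h
    return f"{year + q0 // 4}Q{q0 % 4 + 1}"

def errors_by_horizon(forecasts, outturns):
    """Mean forecast error (outturn minus forecast) at each horizon."""
    by_h = {}
    for origin, fcasts in forecasts.items():
        for h, f in fcasts.items():
            target = shift(origin, h)
            if target in outturns:
                by_h.setdefault(h, []).append(outturns[target] - f)
    return {h: mean(errs) for h, errs in sorted(by_h.items())}

print(errors_by_horizon(forecasts, outturns))
```

Keeping forecasts indexed by origin vintage and horizon, rather than by target quarter alone, is what makes it straightforward to evaluate against whichever data vintage is appropriate and to slot in benchmark forecasts built on the same real-time information.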
This Report seeks to draw lessons from analysis of both historical and recent errors.
Previous forecast evaluations have tended to either focus on long evaluation windows (Independent Evaluation Office (IEO) (2015)) or zoom in more narrowly on recent errors (such as those included in the Monetary Policy Report),footnote [13] but not both. This Report brings together insights from those two perspectives to draw appropriate lessons for the Bank’s forecast processes and modelling.
Section 4 establishes some longer-term statistical properties of the Bank’s forecasts over the past decade (2015–25). Section 5 examines more recent errors and their drivers in depth, drawing on a range of more targeted approaches to highlight seven key findings.
The Report assesses historical forecast performance against traditional statistical evaluation metrics and alternative forecasts…
Longer-term forecast evaluation metrics can help to identify persistent issues with the forecast and place more recent errors in context. Section 4 draws on a range of standard statistical concepts of forecast accuracy, unbiasedness and efficiency to evaluate forecasts over 2015–25, at four horizons: the current quarter in which the Monetary Policy Report is published (‘Year 0’), and one, two and three years ahead (‘Year 1’, ‘Year 2’ and ‘Year 3’ forecasts). The latest forecast in scope for this evaluation is from August 2025. Technical details of these statistical approaches are laid out in Abiry et al (2026).
This Report also benchmarks the performance of the Bank’s forecast against a range of alternative model-based forecasts and projections from external forecasters (Box A). Comparison to these benchmarks helps to assess the performance of the Bank’s forecasts and identify whether they add value beyond alternative approaches. By analysing the relative performance of the Bank’s forecasts, this exercise also controls for the impact of any general increase in economic uncertainty on forecast errors.
Wherever possible, forecasts are compared with more ‘mature’ estimates of data, taken three years after the period in question. Where mature estimates are not yet available, the latest vintage of data is used. Official CPI inflation data do not change from first publication, but other variables can get revised as the Office for National Statistics (ONS) incorporates lagged source information and methodological improvements.footnote [14] Revisions are typically hard to predict (Robinson (2016)) and policy needs to be set on the basis of real-time information. But three-year-old estimates are preferred for this evaluation because they provide a more definitive picture of the economy, and so a better basis for retrospectively evaluating the sources of inflation forecast misses. The impact of data revisions on forecast performance is explored further in Section 5.
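The vintage-selection rule described above can be sketched as follows. The `(year, quarter)` tuple representation is a simplification for illustration, not the toolkit’s own data model.

```python
def evaluation_vintage(available_vintages, reference_quarter):
    """Pick the data vintage used to measure outturns for reference_quarter.

    Prefer the 'mature' estimate published three years (12 quarters) after
    the reference quarter; if no vintage that old exists yet, fall back to
    the latest available vintage. Quarters are (year, quarter) tuples.
    """
    year, q = reference_quarter
    mature = (year + 3, q)  # three years after the period in question
    eligible = [v for v in available_vintages if v >= mature]
    if eligible:
        return min(eligible)  # earliest vintage at least three years old
    return max(available_vintages)  # otherwise use the latest vintage
```

For example, outturns for 2020 Q1 would be measured against the first vintage published on or after 2023 Q1, while a quarter less than three years old falls back to the most recent data available.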
The Covid-19 period is included in results unless otherwise stated. However, as GDP growth was exceptionally volatile during the Covid pandemic, the period from 2020 Q1 to 2022 Q2 is generally excluded from GDP evaluation metrics.
…and draws flexibly on other techniques, like model-based counterfactuals, to shed light on the drivers of recent errors.
An assessment of more recent errors cannot rely solely on statistical tests which require longer samples to produce meaningful results. Some conclusions will also be tentative, since more recent data outturns are more prone to revision. There is also unlikely to be a one-size-fits-all approach to investigating recent errors, with the most appropriate methods depending on the variable and horizon of interest, or the shocks at play.
Section 5 therefore considers a range of complementary approaches to analyse more recent errors. The section focuses on the period since mid-2021, which saw sharp rises in energy prices and increases in Bank Rate shortly after. The analysis includes model-based ‘counterfactuals’ to estimate the role of surprises to, or changes to the treatment of, conditioning paths such as energy prices. The section also draws on alternative models and staff analysis to discuss possible lessons from forecast errors, where these errors may have resulted from mechanisms underpinning the forecast or from mechanisms that may not be fully captured by standard forecast models. The role of other factors such as data revisions is also considered briefly.
Box A: Sources of alternative forecasts
This box summarises a range of alternative forecasts used as benchmarks for the Bank’s forecasts in Section 4 of this Report. These include an AR model, a BVAR model and a raw version of COMPASS, as well as a survey of external forecasters. Table A.1 summarises the approaches.
Table A.1: Description of alternative forecast approaches
| | AR model | BVAR model | COMPASS model | External Forecasters |
| --- | --- | --- | --- | --- |
| Description | Statistical approach to predict future values of a variable based only on an optimally estimated number of its own lags. | Models contemporaneous and lagged relationships between multiple variables. | Medium-size DSGE model; it forms the core of the forecast infrastructure in the Bank. | Forecasts from outside organisations, including market participants and international institutions. |
| Variable outputs | One model for each of GDP growth, CPI inflation, unemployment rate and wage growth. | 20 variables including GDP growth, CPI inflation and wage growth. | 19 variables including GDP growth, CPI inflation and wage growth. | GDP growth, CPI inflation and unemployment rate. |
| Estimation approach | Maximum likelihood estimation with t-distributed errors. | Bayesian estimation. | Bayesian estimation with some calibration. | Average of each forecaster’s central expectation. |
| Estimation window | Starting with 1997 Q1–2014 Q4. Expands by one quarter at each step. | Starting with 1997 Q1–2014 Q4. Expands by one quarter at each step. | Starting with 1987 Q2–2014 Q4. Expands by one quarter at each step. | n.a. |
| Retrospective forecast approach | Parameters re-estimated quarterly using data available at each Monetary Policy Report. | Uses data and conditioning paths consistent with each Monetary Policy Report; parameters re-estimated quarterly. | Uses data and conditioning paths consistent with each Monetary Policy Report; trends and parameters re-estimated quarterly. | Forecasts recorded in real time alongside Monetary Policy Report. |
| References | Abiry et al (2026). | Model follows approach by Giannone et al (2015); referenced in Pill (2024). | Albuquerque et al (2025). | For example, Annex: other forecasters’ expectations of the August 2025 Monetary Policy Report. |
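The AR benchmark in Table A.1 could be approximated along the following lines. This sketch is a simplification: it substitutes OLS estimation with a Gaussian AIC for lag selection in place of the Report’s maximum likelihood approach with t-distributed errors, and shows a single one-step forecast rather than the expanding-window re-estimation at each Monetary Policy Report.

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) with intercept; returns coefficients and residual variance."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] + [y[p - k : len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return beta, resid @ resid / len(Y)

def ar_forecast(y, pmax=4):
    """One-step-ahead AR forecast with lag order chosen by AIC."""
    best = None
    for p in range(1, pmax + 1):
        beta, sigma2 = fit_ar(y, p)
        n = len(y) - p
        aic = n * np.log(sigma2) + 2 * (p + 1)  # Gaussian AIC, a simplification
        if best is None or aic < best[0]:
            best = (aic, p, beta)
    _, p, beta = best
    lags = y[-1 : -p - 1 : -1]  # most recent p observations, newest first
    return beta[0] + lags @ beta[1:]
```

In a retrospective evaluation consistent with Table A.1, `ar_forecast` would be re-run each quarter on the expanding real-time sample available at that Monetary Policy Report.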
4: Longer-term statistical properties of the Bank’s forecasts
This section analyses the statistical properties of the Bank’s forecasts over the past decade. It analyses forecast performance along three common dimensions considered in the forecast evaluation literature: accuracy, unbiasedness, and efficiency. It also benchmarks the Bank’s forecasts against a range of alternative model-based and external forecasts.
4.1: Forecast accuracy
Accuracy relates to how close forecasts have been to outturns.
A standard accuracy metric is the ‘root mean squared error’ (RMSE), which captures the size of a typical forecast error.footnote [15] Table 4.A shows RMSEs for CPI inflation, GDP growth, wage growth and the unemployment rate over 2015–25 (top panel) and split into pre- and post-Covid samples (bottom panel).
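The RMSE is straightforward to compute from a set of forecasts and matched outturns. A minimal sketch, using hypothetical illustrative numbers rather than figures from Table 4.A:

```python
import math

def rmse(forecasts, outturns):
    """Root mean squared error: square root of the average squared forecast error."""
    errors = [f - o for f, o in zip(forecasts, outturns)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical four-quarter inflation forecasts and outturns, per cent.
forecast = [2.0, 2.1, 1.9, 2.2]
outturn = [2.5, 1.7, 2.3, 1.8]
print(round(rmse(forecast, outturn), 2))  # prints 0.43
```

Because errors are squared before averaging, the RMSE weights large misses more heavily than small ones, and positive and negative errors do not offset each other as they would in a simple average (which is why bias is assessed separately).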
RMSEs generally rise at longer forecast horizons. This is expected: near-term forecasts incorporate available high-frequency information via statistical models and judgement, while medium-term forecasting is made harder by unanticipated shocks and wider uncertainty about key economic relationships and dynamics.
The Bank’s forecasts have become notably less accurate post-Covid, in part as economic volatility has also increased.
Comparing the pre- and post-Covid periods, the scale of errors has also increased across the board. The RMSE of one-year ahead inflation forecasts, for example, was 0.6 percentage points pre-Covid, compared to 3.7 percentage points post-Covid. Forecasts for GDP growth and wage growth have also become less accurate on average.
Larger forecast errors in the post-Covid period partly reflect heightened economic volatility, which has made forecasting more challenging. A further challenge over recent years relates to economic measurement. Smaller samples in the ONS’s Labour Force Survey have led to more volatile and uncertain labour market statistics. Meanwhile, some unusually large revisions to GDP have also contributed to larger post-Covid forecast errors when measured against mature (three-year-old) data. This is discussed further in Section 5.
Table 4.A: Forecast accuracy

Root mean squared errors (2015–25) (a) (b)

| | Year 0 | Year 1 | Year 2 | Year 3 |
| --- | --- | --- | --- | --- |
| CPI inflation | 0.2 | 2.4 | 3.3 | 3.4 |
| GDP growth | 1.5 | 5.1 | 6.9 | 7.4 |
| Wage growth | 1.2 | 2.3 | 2.7 | 2.6 |
| Unemployment rate | 0.8 | 0.9 | 0.8 | 0.8 |

RMSE (2015–19)

| | Year 0 | Year 1 | Year 2 | Year 3 |
| --- | --- | --- | --- | --- |
| CPI inflation | 0.1 | 0.6 | 0.6 | 0.5 |
| GDP growth | 0.3 | 0.5 | 0.6 | 0.8 |
| Wage growth | 0.5 | 0.9 | 1.2 | 1.1 |
| Unemployment rate | 0.2 | 0.6 | 0.9 | 1.1 |

RMSE (2022–25)

| | Year 0 | Year 1 | Year 2 | Year 3 |
| --- | --- | --- | --- | --- |
| CPI inflation | 0.2 | 3.7 | 4.9 | 4.8 |
| GDP growth | 1.3 | 1.5 | 2.8 | 2.7 |
| Wage growth | 1.1 | 3.1 | 3.4 | 2.9 |
| Unemployment rate | 0.2 | 0.7 | 0.7 | 0.5 |
Footnotes
- Sources: ONS and Bank calculations.
- (a) Table shows RMSEs of the Bank’s forecasts. Figures shown have been calculated using forecasts for the unemployment rate and four-quarter growth in real GDP, CPI inflation and aggregate whole-economy total wage growth. Top panel reports RMSEs calculated for data over 2015–25 for forecasts from February 2015–August 2025, including the Covid pandemic period. Bottom panel shows RMSEs calculated over two different samples: 2015–19 (using forecasts from February 2015–November 2019) and 2022–25 (using forecasts from February 2022–August 2025). RMSEs are not directly comparable between variables as relative forecast accuracy is also affected by the relative volatility of each indicator. Forecast errors are computed using the data vintage available three years after the first available data within the quarter.
- (b) Figures for Year 0 represent the performance of GDP, CPI inflation and wage growth estimates in the four quarters to the quarter in which the forecast was published (Quarter 0) and the unemployment rate as at that quarter. Years 1, 2 and 3 represent the same for quarters 4, 8 and 12 of the forecast, respectively.
Alternative forecasts provide a useful relative performance benchmark.
To test whether the accuracy of the Bank’s forecasts could have been improved easily, it is helpful to compare them