On the forecast combination puzzle

Wei Qian, Craig A. Rolling, Gang Cheng, Yuhong Yang

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

It is often reported in the forecast combination literature that a simple average of candidate forecasts is more robust than sophisticated combining methods. This phenomenon is usually referred to as the “forecast combination puzzle”. Motivated by this puzzle, we explore its possible explanations, including high variance in estimating the target optimal weights (estimation error), invalid weighting formulas, and model/candidate screening before combination. We show that the existing understanding of the puzzle should be complemented by the distinction between two forecast combination scenarios: combining for adaptation and combining for improvement. Applying combining methods without considering the underlying scenario can itself cause the puzzle. Based on this new understanding, both simulations and real data evaluations are conducted to illustrate the causes of the puzzle. We further propose a multi-level AFTER strategy that can integrate the strengths of different combining methods and adapt intelligently to the underlying scenario. In particular, by treating the simple average as a candidate forecast, the proposed strategy is shown to reduce the heavy cost of estimation error and, to a large extent, mitigate the puzzle.
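The core idea of treating the simple average as an additional candidate within an exponential-reweighting scheme can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's multi-level AFTER algorithm: the learning-rate parameter `lam`, the uniform initial weights, and the plain squared-error update are assumptions for demonstration, and the full AFTER method also accounts for estimated forecast variances.

```python
import numpy as np

def after_combine(forecasts, y, lam=1.0):
    """Exponentially reweighted forecast combination (AFTER-style sketch).

    forecasts: (T, K) array of candidate forecasts for periods 0..T-1.
    y: (T,) array of realized outcomes.
    lam: assumed learning-rate parameter for the exponential update.
    Returns the (T,) array of combined forecasts.
    """
    T, K = forecasts.shape
    # Augment the candidate pool with the simple average, echoing the
    # paper's idea of treating the simple average as one candidate.
    sa = forecasts.mean(axis=1, keepdims=True)
    F = np.hstack([forecasts, sa])
    w = np.full(K + 1, 1.0 / (K + 1))  # start from uniform weights
    combined = np.empty(T)
    for t in range(T):
        combined[t] = F[t] @ w
        # Downweight candidates by their realized squared error at time t.
        sq_err = (y[t] - F[t]) ** 2
        w = w * np.exp(-lam * sq_err)
        w = w / w.sum()  # renormalize to a probability vector
    return combined
```

If one candidate tracks the outcome closely, its weight grows quickly and the combination adapts toward it; when all candidates are comparably noisy, the weights stay close to uniform and the combination behaves like the simple average, which is one way such a scheme can hedge against estimation error.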

Original language: English (US)
Article number: 39
Journal: Econometrics
Volume: 7
Issue number: 3
DOIs
State: Published - Sep 2019

Bibliographical note

Funding Information:
Funding: W. Qian’s research is partially supported by the NSF Grant DMS-1916376 and by the JPMC Faculty Fellowship.

Publisher Copyright:
© 2019 by the authors. Licensee MDPI, Basel, Switzerland.

Keywords

  • Combining for adaptation
  • Combining for improvement
  • Model selection
  • Multi-level AFTER
  • Structural break
