I restrict my attention to quarterly forecasts of GDP growth. Between 2004 and 2007, 27% of the predictions were within 0.5 percentage points of the actual outcome (see Table 1), whereas 56% (27% + 29%) were within one percentage point. Or, if you're a member of the glass-half-empty club, 44% missed the target by more than one point.
Table 1. Distribution of accuracy of forecasts (2004-2007)

| Absolute forecast error | Share of forecasts (%) |
|---|---|
| Within 0.5 percentage points (p.p.) of actual value | 27.2 |
| 0.5-1 p.p. | 28.9 |
| 1-1.5 p.p. | 22.6 |
| 1.5-2 p.p. | 13.7 |
| 2-2.5 p.p. | 5.2 |
| Over 2.5 p.p. | 2.4 |
To pit forecasters against each other I use the Root Mean Squared Error (RMSE), a one-number summary of the deviations of several forecasts. The RMSE punishes positive and negative deviations equally, but penalizes big errors proportionally more than small ones*. I can also use it to form confidence intervals.
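As a concrete illustration, here is how the RMSE might be computed for a short series of forecasts and outcomes (all numbers invented, not taken from the WSJ survey):

```python
import math

def rmse(forecasts, actuals):
    """Root Mean Squared Error: one-number summary of forecast deviations."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Invented quarterly GDP growth forecasts vs. outcomes (annualized %):
forecasts = [2.5, 3.0, 1.8, 2.2]
actuals = [2.0, 3.4, 1.5, 3.0]
print(round(rmse(forecasts, actuals), 2))  # about 0.53
```

The squaring step is what makes one two-point miss costlier than two one-point misses, which is the "big errors hurt proportionally more" property mentioned above.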
According to the RMSE measure, the most accurate forecaster is Gary Thayer, of the firm A.G. Edwards, although he no longer participates in the survey. The second most accurate forecaster, and still on the panel, is Gene Huang of FedEx. (See Table 2.) The best forecaster is able to predict GDP growth within 1.67 percentage points, at a 90% level of confidence. (That means that if he posted 100 forecasts, about 90 of them would fall within 1.67 percentage points of the actual GDP growth rate.)
Table 2. Top-20 WSJ forecasters, by Root Mean Squared Error (RMSE)

| Rank | Forecaster | Firm | RMSE | 90% margin of error (p.p.) |
|---|---|---|---|---|
| 1 | Gary Thayer* | A.G. Edwards | 0.95 | 1.67 |
| 2 | Gene Huang | FedEx Corp. | 0.98 | 1.72 |
| 3 | David Resler | Nomura Securities International | 1.02 | 1.79 |
| 4 | Stuart Hoffman* | PNC Financial Services Group | 1.03 | 1.82 |
| 5 | Allen Sinai | Decision Economics Inc. | 1.05 | 1.83 |
| 7 | Nicholas S. Perna | Perna Associates | 1.05 | 1.85 |
| 8 | Dana Johnson | Comerica Bank | 1.06 | 1.88 |
| 9 | J. Prakken and C. Varvares | Macroeconomic Advisers | 1.06 | 1.86 |
| 10 | R. Berner and D. Greenlaw* | Morgan Stanley | 1.07 | 1.87 |
| 11 | Nariman Behravesh | Global Insight | 1.09 | 1.90 |
| 12 | Robert DiClemente* | Citibank SSB | 1.09 | 1.92 |
| 13 | John Lonski | Moody's Investors Service | 1.09 | 1.91 |
| 14 | Scott Anderson | Wells Fargo & Co. | 1.11 | 1.97 |
| 15 | Douglas Duncan | Mortgage Bankers Association | 1.12 | 1.97 |
| 16 | David Rosenberg | Merrill Lynch | 1.13 | 1.97 |
| 17 | Diane Swonk | Mesirow Financial | 1.13 | 1.99 |
| 18 | David Lereah* | National Association of Realtors | 1.13 | 1.99 |
| 20 | Paul Kasriel | The Northern Trust | 1.15 | 2.02 |

*No longer in the WSJ panel of forecasters, as of November 2007.
It is well known that, over time, a group's forecast is closer to the mark than almost any particular individual's. Among the WSJ panel it's no different: the median forecast is sixth in the ranking, out of 47. The same conclusion applies to the average forecast (average and median are very close to each other in every release of the WSJ survey).
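A quick sketch of that "wisdom of the crowd" calculation, using a hypothetical three-person panel (every number below is invented for illustration):

```python
import math
from statistics import median

def rmse(forecasts, actuals):
    """Root Mean Squared Error over a series of forecasts."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical panel: one row of quarterly GDP growth forecasts per forecaster.
panel = [
    [2.5, 4.0, 1.8],
    [2.0, 2.6, 1.2],
    [3.1, 3.3, 2.4],
]
actuals = [2.2, 3.4, 1.5]

# The consensus forecast: the median across forecasters, quarter by quarter.
consensus = [median(quarter) for quarter in zip(*panel)]
print(rmse(consensus, actuals))
print(min(rmse(row, actuals) for row in panel))
```

In this toy example the consensus RMSE comes out below every individual's, because each forecaster's idiosyncratic misses partly cancel in the median; the WSJ panel shows the same pattern, though not quite as strongly (the median ranks sixth, not first).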
The top participants in the group hold but a tiny advantage over the rest. Even the 20th most accurate person has a margin of error of just over 2 percentage points, versus 1.67 points for the top forecaster. It’s not surprising then that rankings tend to change frequently. For example, at the end of 2006 the top five forecasters were (latest ranking in parentheses): Thayer (1), Rosenberg (17), Perna (8), Sinai (5) and Lonski (14).
Catching a "hot streak" seems to be exceedingly difficult too. Suppose that we define "winning" as being among the 50% most accurate forecasters in a given quarter. (A rather modest victory, may I say.) By that measure, only 37% of wins were followed by a second win, 31% of two-in-a-row's were followed by a third success, and just 17% of those were followed by a fourth one.
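One way to measure that kind of persistence, sketched on an invented win/loss record (True marks a quarter in which the forecaster landed in the most accurate half; the data and function name are mine, not from the survey):

```python
def continuation_rate(wins, k):
    """Of all runs of k consecutive wins, the fraction extended to k+1 wins."""
    streaks = followed = 0
    for i in range(len(wins) - k):
        if all(wins[i:i + k]):
            streaks += 1
            if wins[i + k]:
                followed += 1
    return followed / streaks if streaks else float("nan")

# Invented record: True = among the most accurate half that quarter.
wins = [True, True, False, True, False, False, True, True, True, False]
print(continuation_rate(wins, 1))  # share of wins followed by another win
print(continuation_rate(wins, 2))  # share of two-in-a-rows extended to three
```

If forecasters had no persistent skill, these rates would hover around 50% at every streak length; the declining 37% / 31% / 17% sequence in the WSJ data suggests streaks fade rather than build.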
Can a simple predictor outperform the pros? Michael Bryan of the Federal Reserve Bank of Cleveland, whose commentary I follow in this post, asks that question. He compares the predictions in the Survey of Professional Forecasters (SPF) with the naïve forecast that next period's outcome will be the same as the latest observed outcome. In terms of my data, that is the prediction that GDP growth in, say, 2008:Q1 will be the same as in 2007:Q4.
Bryan finds that 53% of economists made worse predictions than the naïve forecast. The WSJ panel shows much better marksmanship. All of them performed better than the naïve forecast, except one. (The exception is James F. Smith of Western Carolina University, and by a long shot. His RMSE is 2.83, whereas that of the naïve forecast is 1.89. Compare with the values in Table 2.)
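The naïve benchmark is just the one-quarter lag of the series itself; a minimal sketch, on an invented run of growth numbers:

```python
import math

def rmse(forecasts, actuals):
    """Root Mean Squared Error over a series of forecasts."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Invented quarterly GDP growth series (annualized %).
growth = [3.0, 2.6, 1.2, 0.6, 2.1, 3.4]

# Naive forecast: next quarter's growth equals the latest observed quarter's.
naive = growth[:-1]   # forecasts for quarters 2..n
actual = growth[1:]   # realized values for quarters 2..n
print(rmse(naive, actual))
```

Any forecaster whose RMSE beats this number is adding information beyond "tomorrow looks like today"; by that yardstick, nearly the whole WSJ panel clears the bar.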
In five days we will have an advance estimate of how much the economy grew during the first quarter. The naïve forecast says 0.6%. The median WSJ forecast in April is exactly zero (neither cold nor hot, as a friend of mine likes to say). Gene Huang, the top forecaster of the hour, says it will be 0.8%. Which one will be closer to the mark?
* For the ranking of forecasters by RMSE, I include all the participants in the survey who submitted at least ten GDP forecasts between May of 2004 and December of 2007. I only include the predictions submitted at the beginning of the months of February, May, August and November for quarters Q1, Q2, Q3 and Q4, respectively. “Actual” GDP growth is taken to be the advance estimate, released one month after the end of the corresponding quarter. Given the timing of the forecasts and of the advance releases of GDP growth, each forecast appeared about three months before the actual outcome was known.