February 13, 2018
GDPNow's Forecast: Why Did It Spike Recently?
If you felt whipsawed by GDPNow recently, it's understandable. On February 1, the Atlanta Fed's GDPNow model estimate of first-quarter real gross domestic product (GDP) growth surged from 4.2 percent to 5.4 percent (annualized rates) after a manufacturing report from the Institute for Supply Management (ISM). GDPNow's estimate then fell to 4.0 percent on February 2 after the employment report from the U.S. Bureau of Labor Statistics. GDPNow displayed a similar undulating pattern early in the forecast cycle for fourth-quarter GDP growth.
What accounted for these sawtooth patterns? The answer lies in the treatment of the ISM manufacturing release. To forecast the yet-to-be-released monthly GDP source data apart from inventories, GDPNow uses an indicator of growth in economic activity from a statistical model called a dynamic factor model. The factor is estimated from 127 monthly macroeconomic indicators, many of which are used to estimate the Chicago Fed National Activity Index (CFNAI). Indices like these can be helpful for forecasting macroeconomic data, as demonstrated here and here.
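The actual GDPNow factor comes from a full dynamic factor model, so the snippet below is only a minimal static sketch of the general idea: extract a single common activity factor from a large panel of standardized monthly indicators. Here the factor is approximated by the first principal component; the function name and the fake data are illustrative assumptions, not the GDPNow code.

```python
import numpy as np

def common_factor(panel):
    """Approximate a common activity factor from a balanced panel of monthly
    indicators (rows are months, columns are indicators): standardize each
    series to mean 0 / standard deviation 1 and take the first principal
    component. A static stand-in for the dynamic factor model in GDPNow."""
    z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(z, rowvar=False))
    loadings = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    factor = z @ loadings
    return (factor - factor.mean()) / factor.std()   # normalize like the indices in the chart

# Illustration with fake data: 240 months of 127 indicators.
rng = np.random.default_rng(0)
fake_panel = rng.normal(size=(240, 127))
print(common_factor(fake_panel).shape)           # (240,)
```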
Perhaps not surprisingly, the CFNAI and the GDPNow factor are highly correlated, as the red and blue lines in the chart below indicate. Both indices, which are normalized to have an average of 0 and a standard deviation of 1, are usually lower in recessions than in expansions.
A major difference in the indices is how yet-to-be-released values are handled for months in the recent past that have reported values for some, but not all, of the source data. For example, on February 2, January 2018 values had been released for data from the ISM manufacturing and employment reports but not from the industrial production or retail sales reports. The CFNAI is released around the end of each month when about two-thirds of the 85 indicators used to construct it have reported values for the previous month. For the remaining indicators, the Chicago Fed fills in statistical model forecasts for unreported values. In contrast, the GDPNow factor is updated continuously and extended a month after each ISM manufacturing release. On the dates of the ISM releases, around 17 of the 127 indicators GDPNow uses have reported values for the previous month, with six coming from the ISM manufacturing report.
For months with partially missing data, GDPNow updates its factor with an approach similar to the one used in a 2008 paper by economists Domenico Giannone, Lucrezia Reichlin, and David Small. That paper describes a dynamic factor model used to nowcast GDP growth, similar to the one that generates the New York Fed's staff nowcast of GDP growth. In the Atlanta Fed's GDPNow factor model, the last month of ISM manufacturing data have large weights when calculating the terminal factor value right after the ISM report. These ISM weights decrease significantly after the employment report, when about 50 of the indicators have reported values for the last month of data.
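In a full dynamic factor model of this kind, months with partially missing data are typically handled with a Kalman filter, as in Giannone, Reichlin, and Small (2008). The sketch below shows only the cross-sectional intuition: given previously estimated loadings and idiosyncratic variances, the factor estimate for a month is a weighted least-squares projection on whichever indicators have already reported, which is why early releases such as the ISM series carry large implicit weights until more data arrive. The function and the made-up numbers are illustrative, not GDPNow's actual calculation.

```python
import numpy as np

def factor_estimate(obs, loadings, idio_var):
    """Cross-sectional (weighted least squares) estimate of a single factor
    for one month, using only the indicators that have already reported.

    obs      : standardized indicator values, np.nan where not yet released
    loadings : factor loadings for each indicator (assumed already estimated)
    idio_var : idiosyncratic variances of each indicator

    A stylized stand-in for the Kalman-filter update in a full dynamic
    factor model along the lines of Giannone, Reichlin, and Small (2008)."""
    have = ~np.isnan(obs)                        # indicators with reported values
    lam, y, v = loadings[have], obs[have], idio_var[have]
    return float(np.sum(lam * y / v) / np.sum(lam ** 2 / v))

# Illustration: 17 of 127 indicators have reported (e.g., right after the
# ISM release); the rest are missing until later reports arrive.
rng = np.random.default_rng(1)
loadings = rng.uniform(0.3, 1.0, size=127)
idio_var = rng.uniform(0.5, 1.5, size=127)
obs = np.full(127, np.nan)
obs[:17] = rng.normal(loc=1.0, scale=0.5, size=17)   # strong early releases
print(round(factor_estimate(obs, loadings, idio_var), 2))
```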
In the above figure, we see that the January 2018 GDPNow factor reading was 1.37 after the February 1 ISM release, the strongest reading since 1994 and well above either its forecasted value of 0.42 prior to the ISM release or its estimated value of 0.43 after the February 2 employment release. The aforementioned rise and decline in the GDPNow forecast of first-quarter growth is largely a function of the rise and decline in the January 2018 estimates of the dynamic factor.
Although the January 2018 reading of 59.2 for the composite ISM purchasing managers index (PMI) was higher than any reading from 2005 to 2016, it was little different from either a consensus forecast from professional economists (58.8) or the forecast from a simple model (58.9) that uses the strong reading in December 2017 (59.3). Moreover, it was well above the reading the GDPNow dynamic factor model was expecting (54.5).
A possible shortcoming of the GDPNow factor model is that it does not account for the previous month's forecast errors when forecasting the 127 indicators. For example, the predicted composite ISM PMI reading of 54.4 in December 2017 was nearly 5 points lower than the actual value. For this discussion, let's adjust GDPNow's factor model to account for these forecast errors and consider a forecast evaluation period with revised current-vintage data after 1999. Then, the average absolute error of the 85- to 90-day-ahead adjusted model forecasts of GDP growth after ISM manufacturing releases (1.40 percentage points) is lower than the average absolute forecast error on those same dates for the standard version of GDPNow (1.49 percentage points). Moreover, the forecasts using the adjusted factor model are significantly more accurate than the GDPNow forecasts, according to a standard statistical test. If we decide to incorporate adjustments to GDPNow's factor model, we will do so at the time of an initial forecast of quarterly GDP growth and note the change here.
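The post doesn't name the statistical test, so as a hedged illustration, here is one common way to compare forecast accuracy: a Diebold-Mariano-type test applied to the difference in absolute errors, with a simple Newey-West long-run variance. The function and the numbers fed into it are hypothetical, not the evaluation actually run for GDPNow.

```python
import numpy as np
from math import erfc, sqrt

def dm_test(err_a, err_b, lags=4):
    """Diebold-Mariano-type comparison of two forecasts under absolute-error loss.

    err_a, err_b : forecast errors (actual minus forecast) from two models
    lags         : number of lags in a simple Newey-West long-run variance

    Returns (MAE of A, MAE of B, DM statistic, two-sided p-value). A negative
    DM statistic means model A has the smaller average absolute error."""
    err_a, err_b = np.asarray(err_a, float), np.asarray(err_b, float)
    d = np.abs(err_a) - np.abs(err_b)            # loss differential
    n, dbar = len(d), np.mean(np.abs(err_a) - np.abs(err_b))
    lrv = d.var()                                # lag-0 term
    for k in range(1, lags + 1):
        cov = np.mean((d[k:] - dbar) * (d[:-k] - dbar))
        lrv += 2.0 * (1.0 - k / (lags + 1)) * cov    # Bartlett weights
    dm = dbar / sqrt(lrv / n)
    pval = erfc(abs(dm) / sqrt(2.0))             # two-sided normal p-value
    return np.abs(err_a).mean(), np.abs(err_b).mean(), dm, pval

# Hypothetical errors for two competing nowcasts, in percentage points.
rng = np.random.default_rng(2)
errs_adjusted = rng.normal(0, 1.4, size=70)
errs_standard = errs_adjusted + rng.normal(0.1, 0.4, size=70)
print(dm_test(errs_adjusted, errs_standard))
```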
Would the adjustment have made a big difference in the initial first-quarter GDP forecast? The February 1 GDP growth forecast of GDPNow with the adjusted factor model was "only" 4.7 percent. Its current (February 9) forecast of first-quarter GDP growth was the same as the standard version of GDPNow: 4.0 percent. These estimates are still much higher than both the recent trend in GDP growth and the median forecast of 3.0 percent from the Philadelphia Fed's Survey of Professional Forecasters (SPF).
Most of the difference between the GDPNow and SPF forecasts of GDP growth is the result of inventories. GDPNow anticipates inventories will contribute 1.2 percentage points to first-quarter growth, and the median SPF projection implies an inventory contribution of only 0.4 percentage points. It's not unusual to see some disagreement between these inventory forecasts, and it wouldn't be surprising if one—or both—of them turns out to be off the mark.
January 18, 2018
How Low Is the Unemployment Rate, Really?
In 2017, the unemployment rate averaged 4.4 percent. That's quite low on a historical basis. In fact, it's the lowest level since 2000, when unemployment averaged 4.0 percent. But does that mean that the labor market is only 0.4 percentage points away from being as strong as it was in 2000? Probably not. Let's talk about why.
As observed by economist George Perry in 1970, although movement in the aggregate unemployment rate is mostly the result of changes in unemployment rates within demographic groups, demographic shifts can also change the overall unemployment rate even if unemployment within demographic groups has not changed. Adjusting for demographic changes makes for a better apples-to-apples comparison of unemployment today with past rates.
Three large demographic shifts underway since the early 2000s are the rise in the average age of the labor force, the rise in its educational attainment, and the decline in the share that is white and non-Hispanic. These changes are potentially important because older workers tend to have lower unemployment rates than younger workers, workers with more education tend to have lower rates than those with less, and white non-Hispanics tend to have lower rates of unemployment than other ethnicities.
The following chart shows the results of a demographic adjustment that jointly controls for year-to-year changes in two sex, three education, four race/ethnicity, and six age labor force groups (see here for more details). Relative to the year 2000, the unemployment rate in 2017 is about 0.6 percentage points lower than it would have been otherwise simply because the demographic composition of the labor force has changed (depicted by the blue line in the chart).
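As a rough illustration of what such a demographic adjustment does, the sketch below holds each group's labor force share fixed at a base year and reweights current group-specific unemployment rates; the gap between the fixed-weight and actual-weight aggregates is the composition effect. The two-group numbers are made up for illustration only; the actual adjustment uses the full set of sex, education, race/ethnicity, and age cells.

```python
import numpy as np

def weighted_rate(group_rates, shares):
    """Aggregate unemployment rate as a labor-force-share-weighted average
    of group-specific unemployment rates. Holding the shares at a base year
    gives a simple fixed-weight version of the adjustment described above."""
    return float(np.dot(group_rates, shares))

# Hypothetical two-group example (the actual adjustment uses
# 2 sex x 3 education x 4 race/ethnicity x 6 age cells).
rates_2017  = np.array([3.5, 6.5])     # unemployment rates by group, percent
shares_2000 = np.array([0.60, 0.40])   # labor force shares in the base year
shares_2017 = np.array([0.70, 0.30])   # shares after the demographic shift

actual_2017   = weighted_rate(rates_2017, shares_2017)   # 4.4
adjusted_2017 = weighted_rate(rates_2017, shares_2000)   # 4.7
print(actual_2017, adjusted_2017)
# The 0.3-point gap in this toy example plays the role of the roughly
# 0.6-percentage-point composition effect shown by the blue line in the chart.
```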
In other words, even though the 2017 unemployment rate is only 0.4 percentage points higher than in 2000, the demographically adjusted unemployment rate (the green line in the chart) is 1.0 percentage points higher. In terms of unemployment, after adjusting for changes in the composition of the labor force, we are not as close to the 2000 level as you might have thought.
The demographic discrepancy is even larger for the broader U6 measure of unemployment, which includes marginally attached and involuntarily part-time workers. The 2017 demographically adjusted U6 rate is 2.5 percentage points higher than in 2000, whereas the unadjusted U6 rate is only 1.5 percentage points higher. That is, on a demographically adjusted basis, the economy had an even larger share of marginally attached and involuntarily part-time workers in 2017 than in 2000.
The point here is that when comparing unemployment rates over long periods, it's advisable to use a measure that is reasonably insulated from demographic changes. However, you should also keep in mind that demographics are only one of several factors that can cause the unemployment rate to fluctuate over time. Changes in labor market and social policies, in the mix of industries, and in the technology of how people find work can also change how labor markets function. This is one reason why estimates of the so-called natural rate of unemployment are quite uncertain and subject to revision. For example, participants at the December 2012 Federal Open Market Committee meeting had estimates of the unemployment rate that would prevail over the longer run ranging from 5.2 to 6.0 percent. At the December 2017 meeting, the range of estimates was almost a whole percentage point lower, at 4.3 to 5.0 percent.
January 17, 2018
What Businesses Said about Tax Reform
Many folks are wondering what impact the Tax Cuts and Jobs Act—which was introduced in the House on November 2, 2017, and signed into law a few days before Christmas—will have on the U.S. economy. Well, in a recent speech, Atlanta Fed president Raphael Bostic had this to say: "I'm marking in a positive, but modest, boost to my near-term GDP [gross domestic product] growth profile for the coming year."
Why the measured approach? That might be our fault. As part of President Bostic's research team, we've been curious about the potential impact of this legislation for a while now, especially about how firms were responding to expected policy changes. Back in November 2016 (the week of the election, actually), we started asking firms in our Sixth District Business Inflation Expectations (BIE) survey how optimistic they were (on a 0–100 scale) about the prospects for the U.S. economy and their own firm's financial prospects. We've repeated this special question in three subsequent surveys. For a cleaner, apples-to-apples approach, the charts below show only the results for firms that responded in each survey (though the overall picture is very similar).
As the charts show, firms have become more optimistic about the prospects for the U.S. economy since November 2016, but not since February 2017, and we didn't detect much of a difference in December 2017, after the details of the tax plan became clearer. But optimism is a vague concept and may not necessarily translate into actions that firms could take that would boost overall GDP—namely, increasing capital investment and hiring.
In November, we had two surveys in the field—our BIE survey (undertaken at the beginning of the month) and a national survey conducted jointly by the Atlanta Fed, Nick Bloom of Stanford University, and Steven Davis of the University of Chicago. (That survey was in the field November 13–24.) In both of these surveys, we asked firms how the pending legislation would affect their capital expenditure plans for 2018. In the BIE survey, we also asked how tax reform would affect hiring plans.
The upshot? The typical firm isn't planning on a whole lot of additional capital spending or hiring.
In our national survey, roughly two-thirds of respondents indicated that the tax reform hasn't enticed them into changing their investment plans for 2018, as the following chart shows.
The chart below also makes it apparent that small firms (fewer than 100 employees) are more likely than midsize and larger firms to ramp up capital investment significantly in 2018.
For our regional BIE survey, the capital investment results were similar (you can see them here). And as for hiring, the typical firm doesn't appear to be changing its plans. Interestingly, here too, smaller firms were more likely to say they'd ramp up hiring. Among larger firms (more than 100 employees), nearly 70 percent indicated that they'd leave their hiring plans unchanged.
One interpretation of these survey results is that the potential for a sharp acceleration in GDP growth is limited. And that's also how President Bostic described things in his January 8 speech: "For now, I am treating a more substantial breakout of tax-reform-related growth as an upside risk to my outlook."
September 7, 2017
What Is the "Right" Policy Rate?
What is the right monetary policy rate? The Cleveland Fed, via Michael Derby in the Wall Street Journal, provides one answer—or rather, one set of answers:
The various flavors of monetary policy rules now out there offer formulas that suggest an ideal setting for policy based on economic variables. The best known of these is the Taylor Rule, named for Stanford University's John Taylor, its author. Economists have produced numerous variations on the Taylor Rule that don't always offer a similar story...
There is no agreement in the research literature on a single "best" rule, and different rules can sometimes generate very different values for the federal funds rate, both for the present and for the future, the Cleveland Fed said. Looking across multiple economic forecasts helps to capture some of the uncertainty surrounding the economic outlook and, by extension, monetary policy prospects.
Agreed, and this is the philosophy behind both the Cleveland Fed's calculations based on Seven Simple Monetary Policy Rules and our own Taylor Rule Utility. These two tools complement one another nicely: Cleveland's version emphasizes forecasts for the federal funds rate over different rules, and Atlanta's utility focuses on the current setting of the rate over a (different, but overlapping) set of rules for a variety of the key variables that appear in the Taylor Rule (namely, the resource gap, the inflation gap, and the "neutral" policy rate). We update the Taylor Rule Utility twice a month, after the Consumer Price Index and Personal Income and Outlays reports, and use a variety of survey- and model-based nowcasts to fill in yet-to-be-released source data for the latest quarter.
We're introducing an enhancement to our Taylor Rule Utility page, a "heatmap" that allows the construction of a color-coded view of Taylor Rule prescriptions (relative to a selected benchmark) for five different measures of the resource gap and five different measures of the neutral policy rate. We find the heatmap is a useful way to quickly compare the actual fed funds rate with current prescriptions for the rate from a relatively large number of rules.
In constructing the heatmap, users have options on measuring the inflation gap and setting the value of the "smoothing parameter" in the policy rule, as well as establishing the weight placed on the resource gap and the benchmark against which the policy rule is compared. (The inflation gap is the difference between actual inflation and the Federal Open Market Committee's 2 percent longer-term objective. The smoothing parameter is the degree to which the rule is inertial, meaning that it puts weight on maintaining the fed funds rate at its previous value.)
For example, assume we (a) measure inflation using the four-quarter change in the core personal consumption expenditures price index; (b) put a weight of 1 on the resource gap (that is, specify the rule so that a percentage point change in the resource gap implies a 1 percentage point change in the rule's prescribed rate); and (c) specify that the policy rule is not inertial (that is, it places no weight on last period's policy rate). Below is the heatmap corresponding to this policy rule specification, comparing the rule's prescription to the current midpoint of the fed funds rate target range:
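The exact functional form and data sources are documented on the Taylor Rule Utility page; as a minimal sketch of the generic inertial rule being described, the snippet below combines a neutral rate, an inflation gap, a resource gap, and an optional smoothing term. The 0.5 coefficient on the inflation gap is the textbook Taylor (1993) choice and, along with the example inputs, is an assumption made only for illustration.

```python
def taylor_prescription(r_star, inflation, inflation_target=2.0,
                        resource_gap=0.0, gap_weight=1.0,
                        inflation_gap_weight=0.5,
                        smoothing=0.0, prev_rate=None):
    """Generic inertial Taylor-type rule.

    Non-inertial part:  r* + inflation
                        + inflation_gap_weight * (inflation - target)
                        + gap_weight * resource gap
    With smoothing rho: rho * previous rate + (1 - rho) * non-inertial part

    The precise specification used by the Taylor Rule Utility is documented
    on its page; the 0.5 inflation-gap weight here is an assumption."""
    core = (r_star + inflation
            + inflation_gap_weight * (inflation - inflation_target)
            + gap_weight * resource_gap)
    if smoothing and prev_rate is not None:
        return smoothing * prev_rate + (1.0 - smoothing) * core
    return core

# The non-inertial example in the text: weight of 1 on the resource gap,
# no smoothing. Inputs are made up, not the utility's actual nowcasts.
print(taylor_prescription(r_star=2.0, inflation=1.5,
                          resource_gap=-0.5, gap_weight=1.0))
```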
We should note that all of the terms in the heatmap are described in detail in the "Overview of Data" and "Detailed Description of Data" tabs on the Taylor Rule Utility page. In short, U-3 (the standard unemployment rate) and U-6 are measures of labor underutilization defined here. We introduced ZPOP, the utilization-to-population ratio, in this macroblog post. "Emp-Pop" is the employment-population ratio. The natural (real) interest rate is denoted by r*. The abbreviations for the last three row labels denote estimates of r* from Kathryn Holston, Thomas Laubach, and John C. Williams; Thomas Laubach and John C. Williams; and Thomas Lubik and Christian Matthes.
The color coding (described on the webpage) should be somewhat intuitive. Shades of red mean the midpoint of the current policy rate range is at least 25 basis points above the rule prescription, shades of green mean that the midpoint is more than 25 basis points below the prescription, and shades of white mean the midpoint is within 25 basis points of the rule.
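As a small illustration of that color coding, the sketch below maps the gap between the target-range midpoint and a rule's prescription into the three broad buckets just described. The actual webpage uses finer shading, and the function name and example inputs are made up.

```python
def heatmap_bucket(actual_midpoint, prescription, band=0.25):
    """Classify a rule prescription relative to the current target-range
    midpoint using the 25-basis-point band described above."""
    diff = actual_midpoint - prescription
    if diff >= band:
        return "red"     # midpoint at least 25 basis points above the prescription
    if diff < -band:
        return "green"   # midpoint more than 25 basis points below the prescription
    return "white"       # midpoint within 25 basis points of the prescription

# Hypothetical example: a 1.125 percent midpoint versus a 2.75 percent prescription.
print(heatmap_bucket(1.125, 2.75))   # "green": the rule prescribes a higher rate
```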
The heatmap above has "variations on the Taylor Rule that don't always offer a similar story" because the colors range from a shade of red to shades of green. But certain themes do emerge. If, for example, you believe that the neutral real rate of interest is quite low (the Laubach-Williams and Lubik-Matthes estimates in the bottom two rows are −0.22 and −0.06), your belief about the magnitude of the resource gap would be critical to determining whether this particular rule suggests that the policy rate is already too high, has a bit more room to increase, or is just about right. On the other hand, if you are an adherent of the original Taylor Rule and its assumption that a long-run neutral rate of 2 percent (the top row of the chart) is the right way to think about policy, there isn't much ambiguity to the conclusion that the current rate is well below what the rule indicates.
"[D]ifferent rules can sometimes generate very different values for the federal funds rate, both for the present and for the future." Indeed.