Thursday, March 26, 2009

An Economic Turning Point? It’s not in the data yet.

Earlier this month Trichet, President of the ECB, said central banks are beginning to see the signs of an economic turning point. If true, they are certainly not alone. An increasing number of economists are reading the tea leaves and calling for a bottom.

I’ll admit there has been some good news, especially in the household sector. Retail sales in January and February were both better than expected, with the January data being downright good. My estimate of real retail sales puts the level of sales in February up 1 percent from December. Whatever happens going forward, two months of respite from plummeting sales and abysmal consumption is a relief.
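
For anyone who wants to replicate that back-of-the-envelope figure, here is a minimal sketch of the kind of calculation involved: deflate nominal retail sales by the CPI and compare February with December. The file and column names are placeholders, not the actual series I used.

```python
import pandas as pd

# Hypothetical inputs: monthly nominal retail sales and the CPI, one row per month.
# File and column names are placeholders for whatever vintage of data you have.
sales = pd.read_csv("retail_sales.csv", index_col="date", parse_dates=True)["nominal_sales"]
cpi = pd.read_csv("cpi.csv", index_col="date", parse_dates=True)["cpi"]

# Deflate nominal sales to get a rough real retail sales series.
real_sales = sales / cpi

# Percent change from December 2008 to February 2009.
change = 100 * (real_sales.loc["2009-02"].iloc[0] / real_sales.loc["2008-12"].iloc[0] - 1)
print(f"Real retail sales, Dec 2008 to Feb 2009: {change:.1f} percent")
```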

Housing also bounced back somewhat from January’s horrific freefall. According to the FHFA house price series (formerly known as the OFHEO house price index), quality-adjusted housing prices rose 1.7 percent in February. Housing starts, especially multifamily starts, rose nicely, and this week we learned that new home sales rose 4.6 percent. Each of these series has measurement and volatility issues, but I think we can safely say that February was a good month in the housing market.

Enough cheer. My (self-appointed) lot in life is to put good news in perspective.

In my view, while we may be at a bottom, there is no indication of it in the current data. The labor market has not turned around; indeed, it seems to be getting steadily worse. We now have initial claims data through the 21st of March. Claims are hovering around 650,000. And worse, while initial claims have hit a plateau, continuing claims have continued to grow, increasing almost 10 percent in the last four weeks alone. Continuing claims are now 20 percent above their previous-episode highs. That continuing claims are outpacing initial claims indicates falling matching rates: if you lose your job, it is much harder to get a new one.

Plain and simple, the household sector cannot recover with this level of job losses, and the business sector cannot recover without a healthy household sector. Again, we saw some upticks (from record-low levels) in most business survey data. None of it pointed to expansion, and none of it is sustainable.

Nonetheless, if household and employment data were all I had in hand, I might believe the bottom is near. But, in addition to bad IP numbers in the United States, global production appears to remain in freefall. IP in Europe fell at a record pace in January and the February fall in the German IFO survey indicates it is not done yet. And, of course, Asian data continues to be absolutely horrific.

Krugman posted a picture of U.S. IP in the current recession against IP in the Great Depression. No surprise, the Depression won by a landslide. I replicated his plot using Japanese IP in the current episode against U.S. IP in the Great Depression: Japan is almost 20 percentage points ahead.
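
The comparison is easy to reproduce. Here is a minimal sketch, with placeholder file names and assumed peak dates: rebase each IP series to 100 at its pre-recession peak and line the episodes up by months since the peak.

```python
import pandas as pd

# Placeholder inputs: monthly IP for Japan (current episode) and for the U.S.
# around the Great Depression. File names, column names, and peak dates are
# illustrative assumptions, not the exact series used for the chart.
jp = pd.read_csv("japan_ip.csv", index_col="date", parse_dates=True)["ip"]
us = pd.read_csv("us_ip_1920s.csv", index_col="date", parse_dates=True)["ip"]

def rebase_to_peak(series, peak):
    """Index a series to 100 at its cyclical peak, keyed by months since the peak."""
    peak = pd.Timestamp(peak)
    rebased = 100 * series / series.loc[peak]
    months = [(d.year - peak.year) * 12 + (d.month - peak.month) for d in rebased.index]
    return pd.Series(rebased.values, index=months)

paths = pd.DataFrame({
    "Japan, current episode": rebase_to_peak(jp, "2008-02-01"),
    "U.S., Great Depression": rebase_to_peak(us, "1929-07-01"),
})
print(paths.loc[0:14])  # the first year or so after each peak
```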

February will be even worse in Japan. Take a look at this picture of Japanese exports. At the peak, exports accounted for 15 percent of Japanese GDP. The decline in trade is unbelievable. Macroeconomic data series simply do not look like this.
As I have said before, most of Japan’s exports go to the United States, Europe, and China. This decline is an extremely negative indicator for forward-looking demand in these regions. By the way, this decline is not driven by motor vehicles alone; they contribute, but in nominal terms every major category of Japanese exports is down 40 percent (yoy).

In an even worse indicator for Japanese production, real imports are now falling at a comparable rate. Without inputs, factories cannot operate.
Remember the Asian Financial Crisis? It was 1997, and Asia was falling to pieces. Next to the data of the last few months, the Asian Financial Crisis barely shows up as noise on the charts.
Asian (and lots of other) data is trying to tell us we are not done with this recession yet. By not listening, we only forestall adjustments that need to be made.

Tuesday, March 24, 2009

Fiscal Stimulus and Christina Romer: Once More into the Breach

The debate over the effectiveness of fiscal stimulus continues to rage. I have previously given my views on the likely efficacy of fiscal stimulus (Does the multiplier have to be one? and Another stab at fiscal neutrality): the multiplier is almost certainly less than one, is likely close to zero, and may even be negative.

Christina Romer, the head of the President’s Council of Economic Advisers, disagrees and she gave a forceful defense of fiscal stimulus in a recent speech. She has many reasons supporting her view, but basically she believes that any increase in government spending boosts output. This belief is predicated on the notion that the government can increase its demand for goods and services without any negative effect on other parts of the economy: she believes changes in government spending do not change prices, neither goods prices nor interest rates. Further, she appears to believe that the composition of government spending does not matter for this result to hold: any fiscal expenditure boosts output.

There is a broad literature on this topic. Many economists have attempted to estimate the multiplier. And, the vast majority of these studies, including several by the IMF, have found very small and often statistically insignificant effects. That is, most studies have found fiscal multipliers close to zero.

Romer is aware of this literature but attributes the empirical findings to omitted variable bias. In simple terms, fiscal policy is only implemented when the economy is expected to weaken. The results then simply fail to account for how much worse the economy would have been in the absence of the fiscal stimulus. Omitted variable bias is an endemic problem in empirical studies; I lack Romer’s faith in the sign of the bias.

Romer takes a three-pronged approach to defending her position: she highlights the magnitude of the increase in government demand, she draws inferences from her recent work with David Romer, and she uses the results of large-scale macro models. I take each of these in turn.

The Magnitude of Government Spending

Romer notes that the administration’s stimulus plan is the largest in American history (unless noted, all quotes are from Romer’s March 3 speech):

It is simply the biggest, boldest countercyclical fiscal stimulus in American
history. One way to see this is to compare it with Franklin Roosevelt’s New Deal. In the biggest year of the New Deal, 1934, the fiscal expansion was about 1½% of GDP. And this expansion was followed the very next year by a cutback of almost the same size. In contrast, the act that was just passed provides fiscal stimulus of close to 3% of GDP in each of 2009 and 2010.
And her assessment of the program includes none of the financial rescue packages. With the Fed spending about $1.3 trillion (having already spent close to $2 trillion) and the Treasury implementing a new trillion-dollar toxic-asset program, total spending on stimulus is remarkably large, arguably closer to 25 percent of GDP than to 3 percent. In Romer’s view, this government spending cannot help but stimulate economic output.

Romer clearly believes that all government spending is stimulus. In her introduction, she makes the case based on the timing of the bill rather than its composition. I have already spoken on the content of the bill. But here I find an inconsistency in Romer’s views. If all government spending increases output more than one-for-one, then rebate checks should also have a multiplier greater than one.

After all, what is the difference between sending somebody a check for $1,000 and hiring them for 10 hours at $100 an hour? Both cases entail the same transfer of resources. Romer, however, differentiates these cases. In the former case, she believes the multiplier is close to 0.3; in the latter, it is close to 1.5 (Bernstein and Romer 2009). Yet, while both examples entail a transfer of $1,000 from the federal government to private individuals, in the latter case the government has reduced the total labor supply available to the private sector by 10 hours, placing upward pressure on wages. Higher wages mean less private employment.

Put this another way. Romer states “all of an increase in government purchases goes into spending, whereas only some fraction of a tax cut is spent.” This is a partial equilibrium statement. Think of the government’s budget constraint. To spend money, the government must either raise current taxes or borrow; either way, it must take cash from the private sector (either in the form of taxes or in exchange for a bond). Assume it is debt financed: the private investor, who has his own budget constraint, must either reduce consumption or reduce investment in some other project, because he has less money net of his purchase of the government bond. This indicates a multiplier no greater than one.
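
To make the accounting concrete, here is a trivial one-period sketch of the argument. It is an illustration under a stark assumption (the bond purchase is funded entirely out of current private resources), not a model.

```python
# One-period accounting sketch: the government buys $1,000 of goods, financed by
# selling a bond to a private investor. The investor pays for the bond partly by
# cutting his own spending and partly by running down cash savings.
GOVERNMENT_PURCHASE = 1000.0

def naive_multiplier(share_funded_by_spending_cuts: float) -> float:
    """Change in total demand per dollar of government purchases, given the share of
    the bond purchase funded by cutting private consumption or investment."""
    private_cut = share_funded_by_spending_cuts * GOVERNMENT_PURCHASE
    change_in_total_demand = GOVERNMENT_PURCHASE - private_cut
    return change_in_total_demand / GOVERNMENT_PURCHASE

for share in (0.0, 0.5, 1.0):
    print(share, naive_multiplier(share))
# Prints 1.0, 0.5, 0.0: even in the best case (all funding out of idle cash),
# the impact multiplier tops out at one.
```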

Now suppose the government uses the cash to purchase something. Where does the government get the item? It must have bought the good, and it must have paid a positive price (otherwise spend away, I don’t care). But this increase in demand for the particular good must raise its price (as the government has to bid the good away from some other user). This is a real change in relative prices. If the economy is subject to any frictions at all, the multiplier must be less than one.

In the current situation, the price effects are likely to be much larger than in the historical record. There is never enough slack in the economy to spend an extra 3 percent of output and not see a difference. And if the CBO’s new estimates are anything close to accurate, the crowding out is going to get a lot bigger before it gets any smaller.

Large Scale Macro Models

In her speech, Romer states that “policy multipliers [derived in large scale macro models] are surely more accurate than the simple calculations Barro suggests because big macro models try to take into account other factors driving output.” But the suite of modern macro models she relies on is built on the premise that fiscal and monetary policy influence output: the models are hard-wired to find beneficial effects of policy. This is a modeling assumption, not a result driven by data. Forecasting models without these features predict output just as well but find no effect of policy.

Nevertheless, even taking the models as an accurate tool, Bernstein and Romer force the models to produce even greater effects of fiscal policy. I have already written on Bernstein and Romer’s misuse of the macro models (here). In summary, they force the Fed’s interest rate to remain at zero forever. And, since relative prices are all relatively fixed in these models, they force all prices to remain unchanged forever. Since there is no budget constraint in the models (differences between income and consumption are absorbed by an external sector), they have essentially assumed that the government does not displace any other source of demand.

[By the way, whenever you hear someone say they have produced results from a large-scale macro model assuming no monetary policy response, they are effectively fixing prices in the model. So, if you hear these words, use caution in interpreting the results.]

A new paper by Cogan, Cwik, Taylor, and Wieland demonstrates this point using the same suite of models. Cogan et al. assume that the Fed’s interest rate remains at zero for two years and then responds via a Taylor rule. They use exactly the same models used by Bernstein and Romer. This one change pushes the fiscal multiplier down to 1 in the first quarter (prices are sticky in these models) and all the way down to 0.2 after four quarters. Imagine if they were able to let all other prices in the model adjust at the same time. I suspect the multiplier would be close to 0.2 in the first quarter as well.

The Romer and Romer Results

Christina Romer’s work with David Romer uses a narrative approach to assess the likely effectiveness of fiscal policy through the historical experience with tax cuts. (I discuss the details of this paper here.) In short, I find their results just as biased (in an omitted variable sense) as other studies in the literature.

I object to two characterizations she makes in reference to this work. First, she states that doing a narrative analysis for government spending would be difficult. I refer her to this post and this post. In the first post, I discuss the rise in defense expenditures in the late 1970s in the United States and in the second I discuss Japan’s experience. Combine these with Barro’s wartime narrative and I think we have done her work for her.

Second, she says “the usual relationship between tax and spending multipliers would be maintained. That is, measured correctly, I would expect the spending multiplier to be larger than the tax multiplier.” But this sentence comes right after she tells us that a tax cut equal to 1 percent of GDP boosts GDP by 3 percent. That means she actually believes the spending multiplier is greater than 3. I think we should be able to measure a multiplier of that size.

Summary

Nothing I have said is proof that I am right. I do, however, feel that the preponderance of evidence is tilting sharply in my favor. The current economic cycle should once and for all settle the debate on the efficacy of fiscal policy. If the economy is indeed currently in the midst of recovery or if it recovers over the next few months, I will take that as evidence in favor of stimulus; if, on the other hand, the economy continues to fester, I will take that as contrary evidence. I do not believe we can say, as Krugman has argued, that the response is too timid.

But that is for future research.

Modern Macro Policy: Does it accelerate recoveries?

David Bath, in a comment to this post, asked if the fiscal stimulus package is likely to accelerate the recovery. I don’t know the answer for sure. So, I did what I always do: I pulled historical data.

The United States in the 19th century was a haven of laissez-faire economic policy. There was, most of the time, no central bank, and there was no coordinated counter-cyclical fiscal policy. Even the automatic stabilizers, such as unemployment insurance, were virtually nonexistent prior to the 1930s.

I thought about comparing recoveries in the 19th century to recoveries in the 20th century. If macroeconomic policy is effective then I would expect to see faster and more robust recoveries in the 20th century. (You won’t believe me, but I really did expect to find this result.)

The picture below shows the acceleration of GDP growth following a recession. The bars represent the average growth rate over the four years after a downturn in industrial production divided by the average growth rate over the preceding four years. A number of 15 percent implies that growth was 15 percent faster in the four years after the recession. (The results do not change substantively if two or three year intervals are used instead.)
The first two bars use the Federal Reserve’s IP series aggregated to annual frequency. Over the entire series, 1921 to 2008, output grew 30 percent faster following a recession than in the years immediately preceding the slowdown. This number is driven by the 1930s. If the sample is shortened to examine only data from 1960 forward, the increase in growth is closer to 15 percent.

The second data series is available on the NBER website (here) and was compiled for Joseph H. Davis, “An Annual Index of U.S. Industrial Production, 1790-1915,” Quarterly Journal of Economics (November 2004). This industrial production data set runs from 1790 to 1915. The average increase in growth was almost 20 percent over the entire sample and around 26 percent in the post-Civil War sample.

I often use the NBER Total Physical Production series (here). The last column shows the results from this data set: around 15 percent.
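
For readers who want to reproduce the bars, here is a minimal sketch of the statistic, using a placeholder annual production series and a placeholder list of trough years; the actual sources are the ones described above.

```python
import pandas as pd

# Placeholder inputs: an annual production index and a list of downturn (trough)
# years identified in that series. Both are illustrative assumptions.
ip = pd.read_csv("annual_ip.csv", index_col="year")["ip"]
trough_years = [1921, 1932, 1938, 1975, 1982, 1991, 2001]

growth = ip.pct_change()

def acceleration(trough: int, window: int = 4) -> float:
    """Average growth over the `window` years after a trough, relative to the
    average over the `window` years before it (0.15 means 15 percent faster)."""
    before = growth.loc[trough - window:trough - 1].mean()
    after = growth.loc[trough + 1:trough + window].mean()
    return after / before - 1

ratios = pd.Series({t: acceleration(t) for t in trough_years})
print(ratios)
print(f"Average acceleration: {100 * ratios.mean():.0f} percent")
```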

Another way to cut the data is by the average length of recessions. The chart below shows the average duration, in years, of each downturn in IP. There is startlingly little difference in average duration, less than a third of a year across all of the different cuts of the data. Of course, this is annual data, and I cannot necessarily tell the difference between a 13-month and a 23-month recession.
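
With annual data, the duration of a downturn is just the count of consecutive years of falling production. A placeholder sketch of that calculation:

```python
import pandas as pd

# Placeholder input: an annual production index (same form as above).
ip = pd.read_csv("annual_ip.csv", index_col="year")["ip"]

falling = ip.pct_change() < 0
# Label runs of consecutive years, then keep only the runs in which output fell.
run_id = (falling != falling.shift()).cumsum()
runs = falling.groupby(run_id).agg(["first", "size"])
downturn_lengths = runs.loc[runs["first"], "size"]
print(f"Average downturn length: {downturn_lengths.mean():.1f} years")
```
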
It’s amazing that, in the era of modern macro policy, recessions do not end in more robust growth than they did in the era before macro stabilization. Of course, this result could itself owe to successful stabilization policy. If downturns are less severe, the rebounds might be less robust. Nonetheless, the similarities across time raise questions in my mind.

Christina Romer: Tax Cuts and GDP Growth

I am posting out of order. This post is part of a longer piece discussing Christina Romer’s views on the efficacy of fiscal stimulus.

One of the difficult elements of studying fiscal influences on growth is the lack of counterfactuals and exogenous variables. Large changes to government spending or to taxes almost always occur for a reason, and most of the time these reasons are cyclical: the changes are made with the explicit intent of boosting output. [The big exception to this is spending and tax changes during wars. Krugman, Romer, and other neo-Keynesians wish to exclude war spending and taxation from the current analysis. I disagree, but let’s play by their rules.]

In Romer’s words (see her speech here), all studies of fiscal stimulus are plagued by omitted variable bias. Therefore, she does not find it surprising that (almost) all previous empirical studies of the relationship between fiscal policy (both spending and taxes) and output have found very small multipliers. In her view, the omitted variables are biasing the results down. Of course, since they are both omitted and unknown, they could also be biasing the results up.

To bypass the problem, Romer and Romer (2008) use the narrative record to divide large tax changes into exogenous and endogenous pieces. They use the Congressional record and Presidential speeches to separate tax policy actions taken for reasons related to economic activity from those taken for non-economic reasons. They have the right idea: if we can identify exogenous tax moves, and if there are a sufficient number of them, we can identify the impact of tax changes on economic growth.

Romer and Romer find that a tax cut equivalent to 1 percent of GDP will boost output by 3 percent over ten quarters. While I believe tax cuts can stimulate growth, I do not find their results at all convincing. The tax cuts they identify as exogenous are not: the majority of their tax cuts occur in the depths of recessions. Since economies tend to bounce out of recession very quickly, their model mixes the effects of tax cuts with growth recoveries. In particular, their inclusion of a series of tax cuts in 1981 and 1982 significantly biases their results.

While this paper is an interesting academic exercise, to use these results as the justification for fiscal stimulus is wrong.

The Details

Figure 1 on page 46 of the linked paper shows their time series for exogenous tax changes. The majority of tax changes identified as exogenous are tax cuts; apparently tax hikes are less likely to be exogenous. The largest tax cut occurs in 1947 and is almost 2 percent of GDP. Then there are clusters of tax cuts beginning in 1964 and 1981. Each of these clusters contains four cuts made over about two years. The final two significant cuts occur under Bush and are implemented in 2001 and 2003.

Although I applaud their attempt and they do have a good methodology, the dates they have chosen do not seem random in an economic sense. Before we turn to a general look at their dates, let’s take a look at the last two cuts: 2001 and 2003. These I know a bit more about.

Independent of the reasons given in speeches at the time of their passage, these tax cuts were proposed and enacted to counter the 2001 recession. I realize Romer and Romer divide the multitude of tax cuts over this two year period into different categories, some exogenous and some endogenous; and, it is true that the Bush administration was ideologically pro-tax cut; nonetheless, these were all implemented to boost economic output. In the absence of the recession in 2001 and the weak labor market in 2003 these tax cuts would not have occurred.

This highlights the fundamental difficulty Romer faces: the content of speeches and the congressional record does not always reveal the full story.

Take a look at the picture below. The picture shows GDP growth over different horizons surrounding the key tax cuts in the Romer-Romer data. The first bar in each set shows average GDP growth over the 8 quarters before the tax cut. The second bar shows GDP growth for the four quarters ending at the date of implementation. The third bar is GDP growth for the 8 quarters following the cut and the last two bars show average growth for the 10 years before and after, respectively.
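
The windows behind the bars are straightforward to compute. A minimal sketch, with placeholder file names and purely illustrative tax-cut dates:

```python
import pandas as pd

# Placeholder inputs: quarterly annualized real GDP growth and a handful of
# tax-cut implementation dates. The dates here are illustrative, not the
# Romer-Romer dating.
gdp_growth = pd.read_csv("gdp_growth.csv", index_col="quarter", parse_dates=True)["growth"]
cut_dates = pd.to_datetime(["1964-03-01", "1971-03-01", "1981-09-01", "2001-06-01", "2003-06-01"])

def window_means(date):
    """Average growth in the windows shown as bars in the chart."""
    pos = gdp_growth.index.get_loc(date)
    return {
        "8 qtrs before": gdp_growth.iloc[pos - 8:pos].mean(),
        "4 qtrs ending at cut": gdp_growth.iloc[pos - 3:pos + 1].mean(),
        "8 qtrs after": gdp_growth.iloc[pos + 1:pos + 9].mean(),
        "10 yrs before": gdp_growth.iloc[pos - 40:pos].mean(),
        "10 yrs after": gdp_growth.iloc[pos + 1:pos + 41].mean(),
    }

print(pd.DataFrame({d.year: window_means(d) for d in cut_dates}))
```
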
The first thing to notice is that, with the exception of 2003 and 1971, growth in the year prior to the tax cut was slower than growth on either side of the cut. That is, whether or not there is a recession, these tax cuts only seem to occur when economic growth is relatively slow. My interpretation of this finding (especially when combined with specific knowledge of the 2001 and 2003 cuts) is that the majority of the tax cuts in the post-war period are not completely exogenous in the sense Romer and Romer need them to be.

In particular, take a look at the bars labeled 1981. Reagan, like Bush, was philosophically pro-tax cuts. He would likely have tried to implement tax cuts independent of the economic situation. Whatever the counterfactual, these tax cuts were made in the midst of the (at least until now) deepest downturn of the post-war era.

The four tax cuts which occurred in 1981 likely provide much of Romer and Romer’s identification. The four tax cuts were followed by a monumental acceleration in growth: from almost -2 percent to almost plus 5 percent. Their statistical model attributes all of this acceleration to the tax cuts, not to the fact that the economy was bouncing back from a recession.

So, while the tax cuts may have helped boost growth, the model attributes too much growth to the tax cuts and not enough to the cyclical state of the economy. In other words, Romer and Romer suffer from omitted variable bias just as other studies do. To use these numbers as the basis for fiscal policy is wrong.

Tuesday, March 17, 2009

Housing Permits: Good news or a problem with the Seasonal Factors?

February housing starts and permits bounced off their January lows. The news is positive: after 8 months of large declines, we finally get a bounce. And at least some analysts are calling for the bottom on starts. (See this post from Calculated Risk.) To me, it does not mean the end, but at least it is not bad news.

At least that’s what I thought this morning. However, this evening I delved a little deeper into the data. It seems the uptick in housing permits is an illusion caused by a larger than normal change in the seasonal adjustment factor.

Take a look at the picture below. It shows the seasonally adjusted permits data as released by the Census (the black line). This data shows a slowing and maybe even an inflection point near the end of 2008. (Even if this data were true, it would not signal the bottom – just take a look at the early 2008 figures to see this.) However, the inflection and the rise in February owe entirely to this year’s seasonal factors. If, instead, I use last year’s factors for December, January, and February, I find a line that is still moving down (the blue extension).
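
The substitution is easy to do yourself. A minimal sketch, with placeholder file names, backing the implied seasonal factors out of the published data and re-applying last year’s factor to each of the last three months:

```python
import pandas as pd

# Placeholder inputs: not-seasonally-adjusted and seasonally adjusted permits,
# one row per month. File and column names are illustrative.
nsa = pd.read_csv("permits_nsa.csv", index_col="date", parse_dates=True)["permits"]
sa = pd.read_csv("permits_sa.csv", index_col="date", parse_dates=True)["permits"]

factors = nsa / sa               # implied multiplicative seasonal factors
alt_factors = factors.shift(12)  # same-month factor from one year earlier
alt_sa = nsa / alt_factors       # permits re-adjusted with last year's factors

compare = pd.DataFrame({"published SA": sa, "SA, last year's factors": alt_sa})
print(compare.loc["2008-10":"2009-02"])
```
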
Here is a picture of the seasonal factors themselves between 2005 and February 2009. Take a look at their sharply lower level (this boosts permits) and their odd shape relative to previous years. I am not a seasonal adjustment expert, but aren’t seasonal factors supposed to be stable? Shouldn’t they adjust the monthly data in a consistent manner across time? Otherwise, aren’t they simply introducing noise into the seasonally adjusted series?
So, now I am at a loss on how to interpret the permits data. I was really thinking that the 10 percent rise in single-family permits might (just might) mean something. It was just one month but it was still better than nothing. Now, it seems we should not be too excited about the uptick.
Changes in the seasonal factors do not foretell good times ahead.

By the way, I am not sure I understand why the seasonal factors are so unstable. Is it because the downturn is throwing off the estimation procedure? If somebody has a good explanation of these factors, please post a comment or email me at secreteconomist@gmail.com.

Capacity Utilization: Bad times ahead for investment?

Industrial production fell 1.5 percent in February. The fall is bad news, but compared with Japan’s 10 percent fall for January, released last week, the number was almost heartening. In level terms, IP is now near its lows of the 2001 recession. (Again heartening compared with Japan, whose IP is at its 1984 level – anybody remember Footloose or Time After Time?)
More interestingly, manufacturing capacity utilization, at 67.4 percent, fell solidly below its previous post-war low. This is the largest fall in utilization rates in the history of the series. This number is consistent with the large manufacturing job losses over the past three years.
Total capacity utilization exactly tied its previous post-war low of 70.9 percent, reached in December 1982. This number is boosted by both utilities and mining, neither of which has fallen to any significant degree so far in this downturn. Both series likely benefit from a decline in relative capacity. Nonetheless, I expect mining capacity utilization to fall going forward, as low metals prices are likely to close an increasing number of mines.

That capacity utilization is so low is bad news. It means that between 10 and 15 percent of our factories are idle, relative to normal, expansionary utilization rates. To many (see for example this post at Calculated Risk), the low rate of capacity utilization implies ongoing contractions in business investment. The intuition is simple: if factories are under-utilized today, why would firms invest in new capacity going forward?
And it’s true: falling capacity utilization tends to predict investment contractions. The rate, however, seems to be a good indicator of bad times today rather than an independent predictor of bad times in the future.

To see this, take a look at the previous post-war low for total capacity utilization: December 1982.
Despite the record-low utilization rates at the end of 1982, 1983 was the best year on record in terms of investment growth. This statement is true for both equipment and software spending (shown above) and for nonresidential structures.

Does this mean I am calling the end of the recession? Of course not. I am only making the simple point that low capacity utilization is not inherently linked to low investment. We are still in the midst of the downward leg of this recession. The only good news I see is the occasional lack of bad news. For example, housing starts for February bounced off their January lows. This is good news relative to the decline most people expected; however, it does not signal the end of the housing correction. Inventories are still too high (though adjusting). The economy can’t recover while initial claims for unemployment remain in the 600,000s.

Friday, March 13, 2009

Larry Summers and Federal Reserve Independence

Warning: This post is speculative. It contains my thoughts and concerns only. It is not economic analysis.

Larry Summers, one of the President’s chief economic advisors, gave a speech at Brookings today. The speech was mostly what one would expect: lots of talk about how the administration’s bailout is going to work and a few assertions that it might already be working. We could go through these, but you would not find my views surprising.

Two paragraphs in the introduction to the speech, however, are of note. Here is the first:

Economic downturns historically are of two types. Most of those in post-World War II-America have been a by-product of the Federal Reserve’s efforts to control rising inflation. But an alternative source of recession comes from the spontaneous correction of financial excesses: the bursting of bubbles, de-leveraging in the financial sector, declining asset values, reduced demand, and reduced employment.
Two causes of recessions? The Fed and bubbles bursting. You do not have to be a real business cycle economist to think there might, just might, be other reasons. For example, Hamilton seems certain that at least some of the 1970s and 1980s downturns were a result of oil shocks.

Ignore the bubbles part. Why is he saying that the Fed has caused the majority of the post-war recessions? Notice, he says this without any of the usual hedge words or qualifiers. He states it categorically, as if it were fact. This is not an accident. It is the written text of a speech, and the written text of a White House official’s speech is vetted. Even if Summers truly believes the statement, why put it in the speech?

Before I go on, here is the second paragraph.

Our single most important priority is bringing about economic recovery and ensuring that the next economic expansion, unlike its predecessors, is fundamentally sound and not driven by financial excess.
“Unlike its predecessors.” The phrase jumps out. Is he saying that all previous economic expansions have been fundamentally unsound and driven by financial excess? All of them? Or does he just mean the expansions since, say, 1987, when Greenspan took over the Fed?

He is very carefully, and almost in so many words, saying not only that the Fed causes recessions, but that the periods in which the Fed has seemed to shepherd the economy along robust expansion paths are actually the precursors to bubbles, making the Fed responsible for all recessions.

Here is what I fear. (Let me emphasize again: this is not analysis it is just my own personal thoughts on the matter.) These statements are intended to subtly begin the process of undermining the Fed. I have a hunch the administration would like a Fed with even more expansionist views on monetary policy. Perhaps one that is a bit less independent and a bit more willing to print.

I hope I am misreading the statements: I don’t think I am. I will be watching Summers’ speeches a bit more closely from now on. This is a dangerous path they are walking.

Alan Greenspan and Bad Statistics

In a Wall Street Journal editorial earlier this week, Alan Greenspan categorically denied any complicity on the part of the Fed in the run-up of housing prices between 2002 and 2006.

Accelerating the path of monetary tightening that the Fed pursued in 2004-2005
could not have "prevented" the housing bubble.
While the Fed may have failed in its role as prudential regulator and may, through that channel, have contributed, I completely agree with Greenspan: the monetary policy stance of the Fed was not at fault. Greenspan attributes the run-up in house prices to home mortgage rates rather than to “easy money” policies on the part of the Fed.

There are at least two broad and competing explanations of the origins of this crisis. The first is that the "easy money" policies of the Federal Reserve produced the U.S. housing bubble that is at the core of today's financial mess.

The second, and far more credible, explanation agrees that it was indeed lower interest rates that spawned the speculative euphoria. However, the interest rate that mattered was not the federal-funds rate, but the rate on long-term, fixed-rate mortgages. Between 2002 and 2005, home mortgage rates led U.S. home price change by 11 months. This correlation between home prices and mortgage rates was highly significant, and a far better indicator of rising home prices than the fed-funds rate.
At first glance, Greenspan seems to be quibbling over subtle changes in the term structure. After all, the Fed implements monetary policy through control of the shortest end of the yield curve. Indeed, over at least part of this time period in question, the Fed was actively trying to influence the long end of the curve through its monetary policy statements: recall the language such as “measured pace” in the statements.

Greenspan, however, is clearly aware of these issues and instead cites a fall in the correlation between the Fed Funds rate and mortgage interest rates as an explanation. He states

The Federal Reserve became acutely aware of the disconnect between monetary policy and mortgage rates when the latter failed to respond as expected to the Fed tightening in mid-2004. Moreover, the data show that home mortgage rates had become gradually decoupled from monetary policy even earlier -- in the wake of the emergence, beginning around the turn of this century, of a well arbitraged global market for long-term debt instruments.

U.S. mortgage rates linkage to short-term U.S. rates had been close for decades. Between 1971 and 2002, the fed-funds rate and the mortgage rate moved in lockstep. The correlation between them was a tight 0.85. Between 2002 and 2005, however, the correlation diminished to insignificance.

The picture below shows the Fed Funds rate and the 30-year mortgage rate from 1974 through February 2009. On average, the two rates do indeed move closely together; arbitrage arguments limit the differences between them. The long-term average, however, hides frequent deviations in the two series. The spread between the two rates increased sharply in 1974, 1992, and 2002.
Greenspan uses the correlation in the two rates as his proxy for the efficacy of monetary policy. He notes that the correlation between the two series dropped sharply over the three-year period between 2002 and 2005. It’s true: the correlation was abnormally low.

But the fall in the correlation was neither unique nor long-lasting. The picture below shows the three-year backward-looking correlation between the Fed Funds rate and the 30-year mortgage rate. The correlation bounced immediately back to its 2002 level, and the average correlation between 2004 and 2009 was only a shade below the average from 1974 to 2002. The temporary change in the correlation does not appear to have been caused by the shift in global savings patterns; otherwise it would have remained low. The correlation was actually at its lowest point in the late 1990s.
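
The rolling correlation is simple to reproduce. A minimal sketch with placeholder file and column names, using monthly averages of the two rates:

```python
import pandas as pd

# Placeholder inputs: monthly averages of the effective fed funds rate and the
# 30-year fixed mortgage rate, one row per month.
rates = pd.read_csv("rates.csv", index_col="date", parse_dates=True)
fed_funds, mortgage = rates["fed_funds"], rates["mortgage_30y"]

# Three-year (36-month) backward-looking rolling correlation.
rolling_corr = fed_funds.rolling(window=36).corr(mortgage)

print(rolling_corr.loc["2002":"2005"].min())   # the low point Greenspan highlights
print(rolling_corr.loc["1974":"2002"].mean())  # the long-run average for comparison
print(rolling_corr.loc["2004":"2009"].mean())  # the post-2004 rebound
```
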
Moreover, Greenspan is careful, throughout his editorial, to refer to long-term mortgage rates. He avoids mentioning ARMs entirely. Over this period, the number of ARMs issued was at a historic high. And the fall in correlation, as shown in the picture below, is not evident between ARM rates and the Fed Funds rate. The increase in the Fed Funds rate seems to have pushed up the ARM rate.
Greenspan is famous for knowing data inside and out. He knows these numbers; I don’t know why he would misuse them. And he is misusing statistics to make a case that need not be made. The Taylor rule is descriptive, not prescriptive. Only a foolish central bank would implement policy with the rule. It is a useful guide for understanding central banks and their actions, nothing more.


Wednesday, March 11, 2009

Separations and Hires: The key to understanding labor force dynamics.

The JOLTS data (find the data here) produced by the BLS gives insight into the recent job losses. As Robert Shimer, a professor at the University of Chicago, showed some time ago, unemployment can go up either because workers become more likely to lose their jobs (the separation rate) or because unemployed workers have a more difficult time finding new jobs (the hires or matching rate). The BLS only began collecting the JOLTS data in late 2000, much too late for us to compare the current downturn to previous episodes. Bob Shimer, however, has computed separation and matching rates going back to 1947 (his data is here). The two data sets are not strictly comparable, but I think we can take the lessons from Shimer’s data and apply them to the current episode.

I have spent a lot of time working with his data lately. The cyclical behavior of matching and separation rates is remarkable and should provide the key to the next level of understanding in business cycle research. Matching rates, the probability of finding a job conditional on unemployment, begin to fall well before recessions begin and continue to fall well after the recession ends. Separation rates tend to rise at the beginning of recessions and tend to fall well before the end of the recession. Not surprisingly, the worst recessions in the post-war era (1958, 1982) are characterized by large changes in both rates.

In every post-war recession, the separation rate returned to more or less its long-term average about four months before the trough. The fall in separation rates also coincides with a rise in consumption. Apparently, consumption begins to rise once employed households no longer fear unemployment – a rational outcome. Consumption rises before unemployment falls because unemployed workers continue to have trouble finding work long after the recession ends.

As a result of this research, I am beginning to have more faith in the signals emitted by the JOLTS data. First, take a look at the picture below. The picture shows the number of hires each month in the JOLTS data from late 2000 to January 2009. Amazingly, the number of hires began to fall as early as January 2006, the same month the housing market turned south.
This is the clearest piece of data I have yet come across to indicate that the collapse of the housing market was not a random event. The decline in hire rates reduces the permanent income of households. People realize that conditional on losing their job, new work will be harder to find. Households also seem to know that this trend tends to have long cyclical properties – a decline in the series today is likely to signal a long period of increasingly lower matching rates.

Of course, I am looking for indications of turning points. I want to know when the economy is going to recover, and for that we need to look at the separation rate. The picture below is puzzling at first. It shows that the number of separations has been steadily falling since early 2007. This data alone would indicate that flows into unemployment should be falling, quite the opposite of what seems to be happening.

The problem with the data on total separations is that it does not distinguish voluntary from involuntary separations. If I quit my job today, knowing I had a new job in the bag, I would show up first as a separation and then as a hire. But this type of turnover is not actually of interest; we care only about involuntary separations. To get the real picture, I subtract the number of monthly quits from the total. The resulting picture, shown below, gives a completely different view of the state of the labor market.
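
The adjustment is a one-line calculation on the published JOLTS series. A minimal sketch with placeholder file and column names:

```python
import pandas as pd

# Placeholder inputs: monthly JOLTS total separations and quits, in thousands.
jolts = pd.read_csv("jolts.csv", index_col="date", parse_dates=True)

# Involuntary separations: total separations less voluntary quits.
involuntary = jolts["total_separations"] - jolts["quits"]

# Compare the latest month with the 2001-07 average, as in the text.
baseline = involuntary.loc["2001":"2007"].mean()
latest = involuntary.loc["2009-01"].iloc[0]
print(f"January 2009 vs. 2001-07 average: {100 * (latest / baseline - 1):.0f} percent higher")
```
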
The level of separations in January 2009 was 28 percent higher than its 2001-07 average level. Keeping in mind that half of that time period was during bad labor markets, this statistic is quite stunning. More importantly, however, for those looking for a near-term recovery, the series was still rising in January. If the separation rate fell sharply in February, I would expect a recovery in mid-2009. It’s possible, but we don’t see any evidence of that yet. If anything, the data indicate a worsening in the separation rate: Through the first week of March, initial claims for unemployment insurance were still rising.

Casey Mulligan, another one of those Chicago economists, notes in his blog that consumer spending is falling even in the face of rising disposable income. He attributes the fall to the sharp fall in asset values. I believe in wealth effects but most estimates are actually quite small. So, while I agree with his assessment and think the change in asset prices is playing a role, I believe the decline in consumption can be more directly attributed to changes in the labor market.
Even as current income continues to rise, the high separation and low matching rates have sharply reduced permanent income for households – they are faced with greater probability of job loss and lower odds of getting a new job if they do lose their job. And, labor income is far and away the largest portion of permanent income for the vast majority of Americans.

Friday, March 6, 2009

The Employment Report: Will the bad news never end?

Today’s employment report was bad, although not surprising, news. Job losses across sectors continued as they have for the past several months. The only mild surprise in the report was a drop in the number of people self-reporting as out of the labor force. This, combined with the relatively minor losses in the household survey, might give us an inkling of hope for a bottom. It’s too soon to draw any conclusions, but if the pattern continued for a couple more months, I might get optimistic.

One of the key features of the employment report is the ongoing downward revisions to previous months’ reports. I took the time to download the real-time data from the Philly Fed. The dashed line shows the number of job losses reported in the initial report. The solid line shows the monthly job loss as reported in the March 6th employment report. These revisions are unusual: the BLS’s methods are quite good, and systematic revisions of this size are rare.

In particular, these revisions, combined with the benchmark revision reported last month, have substantially lowered the level of December employment. Let me give one statistic in particular (and I only bring this statistic up because I was right). On December 11, I wrote a post predicting a loss of 933,900 jobs in December. Well, and this is completely meaningless, the difference between the level of November employment as I knew it then and the level of December employment as we know it now is 938,000. Not bad for a Chicago-trained economist. Enough of that! Just keep in mind that we are still one benchmark away from knowing the job losses in the last 9 months of 2008. I suspect we will have another big round of markdowns to sort through next February.
Back to the employment report and its implications for the state of the economy. The next figure shows the level of manufacturing employment from 1939 to February 2009. In level terms, manufacturing employment is now below its post-war low (the one exception is February 1946). This is not the result of a long-term secular trend. The level of manufacturing employment rose, on average, from 1939 to the late 1960s. It was then reasonably stable from the 1960s through 2000. (If I fit a line, I find the slightest downward trend.)

In 2000, the world changed. Between January 2000 and January 2004, the United States lost 20 percent of its manufacturing labor force and it never got it back. As of this report, we have lost almost 30 percent of our manufacturing workforce since January 2000. I hope these jobs come back but at the moment it does not look promising.
Of course, manufacturing as a share of employment has fallen steadily since the end of WWII. It is now the service sector that dominates the U.S. economy. The share of the service sector has risen steadily from around 60 percent at the end of the war to 85 percent today. This trend does not have to be bad for the United States. Many economists believe that it is the natural progression of economies: from agriculture to manufacturing to services.

In particular, the service sector has proven to be a source of stability over time. Service-sector job losses have tended to be much smaller than manufacturing losses. (We can argue about deviation from trend but I only care about outright losses at the moment.) Unfortunately, this recession has not confined itself to the manufacturing sector.

Take a look at the picture below. Job losses in the service sector are staggering. The service sector accounts for about half of the job losses to date, 2.1 million jobs in the last twelve months. In percentage terms, the twelve-month loss of jobs is 30 percent higher than the next biggest loss (recorded in 1949).

The cumulative job losses in this recession are stunning. That the losses are accelerating is worrisome. These job losses will push house prices (and other asset prices) down even further and push ever more households into foreclosure. More bad debt will put extra pressure on bank balance sheets. Bad bank balance sheets … It's Friday; let’s just call it a week.

Wednesday, March 4, 2009

Is it really a financial crisis?

This particular downturn in economic activity has become known as The Financial Crisis. It’s called that because most people—economists, policy makers, and commentators alike—took very little notice of the slowdown in economic activity until mid-to-late summer 2007. In its Monetary Policy Report to Congress in July 2007, the Federal Reserve confidently states, “the U.S. economy appears likely to expand at a moderate pace over the second half of 2007, with growth then strengthening a bit in 2008 to a rate close to the economy’s underlying trend.” This is Central-Bank-speak for “all is well with the world.”

By August 17, 2007, merely a month later, the Fed’s tone had changed. In the press release following the August FOMC meeting, the Fed stated “Financial market conditions have deteriorated, and tighter credit conditions and increased uncertainty have the potential to restrain economic growth going forward.” The Fed was apparently blindsided by the crisis.

We can weave a similar timing sequence for the financial stress that occurred in September 2008. In the summer of 2008, the Fed, once again, felt that the economic situation was improving. The takeover of Bear Stearns was far behind them, and many indicators of financial-market stress were easing. In September, the world was jolted awake by the takeover of Fannie and Freddie and the ensuing failure of Lehman. Many policy makers attribute the rapid fourth-quarter economic deterioration to the failure of Lehman. A growing chorus (see this WSJ editorial) attributes the deterioration to the takeover of Fannie Mae and Freddie Mac.

I will restate the timing. Deterioration in the performance of U.S. subprime mortgages came to a head in August 2007, initiating the financial crisis. In September 2008, the failure of Fannie Mae, Freddie Mac, and Lehman sent a financial shock through an already fragile system causing a rapid decline in economic activity. The shock runs from the financial sector to the real sector and maybe back again.

With at least the perspective of hindsight, however, the timing and location of changes in economic activity belie the financial crisis story.

Housing led this crisis, so let’s look at the housing market first. The picture below shows housing permits for the United States. The series shows a clear peak in January 2006. This decline began too early to have been caused by the financial crisis. At that time, most people were still unclear on the difference between subprime and prime mortgages, and subprime originations were proceeding apace. The default rate on both classes of loans was low and stable.

Housing markets do not simply turn. They respond to the economic environment. Residential investment strongly leads the cycle. That permits turned south so early indicates that some shift in the economy was already apparent to the population if not to economists.

Further evidence of the shift in the economic environment can be gleaned from the yield curve. January 2006 is also the month the yield curve inverted. Inversions of the yield curve, on both empirical and theoretical grounds, predict recessions.

Of course, the Federal Reserve, and most forecasters, did not believe that either the decline in permits or the inversion of the yield curve was a cause for concern. Federal Reserve staff at the time published papers using econometric models to dismiss inversions of the yield curve as a statistical anomaly rather than a predictor of recessions. (See this paper by Jonathan Wright.) This was a mistake. The theory is quite clear, and I trust theory over econometric models.

The Fed also remained convinced that housing only interacted with the rest of the economy through house prices, and house prices were still rising. They still seem not to understand that housing is a long-lived durable good. As such, the behavior of housing investment should give a very accurate read on households’ views of near-term growth prospects.

Of course, we still don’t know what underlying shock hit the household sector and thereby moved the bond and housing markets. I can speculate, however.

The following picture shows the level of hourly real wages between 2000 and 2008. Real wages began to fall in early 2004. Perhaps households in 2004 and 2005 continued to believe the forecasts of economists, that the economy would grow at a robust pace for the foreseeable future. And perhaps these households believed they would share in this prosperity. These households then likely borrowed assuming they would be able to afford their payment stream with their rising wages. When rising energy prices made this assumption wrong, the most fragile, the subprime households, began to default. That would make this an oil shock. I don’t know; I am just speculating.
I find this convincing, but there is more. The story of the financial crisis has the shock running from the United States to Europe beginning in the summer of 2007. However, as is shown in the picture below, housing permits in the United States and in the European Union turned south at exactly the same time, January 2006. Remember, this is way too early for the financial shock to have been transmitting U.S. subprime problems across the Atlantic.

What’s more, the coincident timing of the downturn does not just rest with housing; it extends to other important macro series as well.

Real retail sales, shown in the picture below, also turned soft with similar timing in both economies. In both economies, real retail sales were growing solidly when they hit a wall in June 2007. At that time, both series turned flat and did not grow, on average, for the next year. June 2007 predates the initial wave of financial turmoil. In May 2008, U.S. retail sales turned sour, while EU sales stayed flat. I don’t know why, but perhaps the foreclosure crisis in the United States, or perhaps the weaker social safety net, was starting to show through. In any case, May 2008 predates the increase in turmoil that began in September.
Industrial production, shown next, in the United States and the European Union has not historically moved at the same pace, although business cycles have been somewhat synchronized. In this episode, however, the downturn in IP happens at exactly the same time in the two economies. December 2007 is the peak date in both places. This date is after the initial increase in financial turmoil, but the coincident timing still points to a common shock.

In August 2008, IP started to collapse in earnest. August 2008 predates Lehman. August 2008 predates Fannie and Freddie. Indeed, as you can see from the pictures in this post, IP turned south in most economies before September 2008.
It seems amazingly obvious to me that there was a real-economy-based reason that all of these financial companies came under stress in September 2008. The economy was weakening, and the value of their portfolios was dropping. Although none of us knew enough to price them ourselves, the magic of the market was at work, and collectively we knew. (By the way, that’s how it’s supposed to work. I am supposed to be able to infer the state of the economy from asset prices, using the market as an information aggregator.)

Why has the Fed, with all of its resources, come to a different conclusion? In September 2008, nobody knew that IP was collapsing. At the time, the latest “real” data on the economy was for July, and everything looked fine. The Fed (and many, many others) jumped to the conclusion that the financial turmoil was occurring independent of any disruption in the macro economy: a classic panic. In that view, the low market value of the assets on these institutions’ balance sheets must reflect fire-sale prices rather than economic fundamentals.

I don’t know the true nature of the original shock. But I do know that that original shock does not appear to have been a financial shock. The timing and global coordination of the downturn do not support a financial shock alone.

We are still debating the cause of the Great Depression, and however this episode turns out, we will be debating its causes for a long time as well.

The Unemployment Rate: A Poor Economic Indicator

People seem curious about the unemployment rate. Among the most common questions I am asked is “How high do you think the unemployment rate will get?” Most askers seem obsessed with the 10 percent barrier.

They are asking the wrong question.

The unemployment rate is calculated from the household survey. It is the number of unemployed persons divided by the labor force. Simple enough and an easy standby measure.

The problem is, of course, in the definition of an unemployed person. To be unemployed, one must be out of work and actively seeking employment. If you are not actively seeking employment, then you are out of the labor force. The problem is that many people give up looking when there are no jobs to be had (a logical course of action, but statistically problematic).
Think of a mill town. In a typical mill town, most of the employment is related to the operation of the mill. The mill workers are directly employed, but even the grocer is there only to provide groceries to mill workers. When the mill shuts down, there are no jobs. So what answer does a household from this mill town give if surveyed by the BLS? Most likely, they respond that they are not in the labor force. They would work if the mill were open, but it is not.

This example is stylized, but the principle is widespread: not every discouraged worker is from a mill town, yet the mechanism is the same. When unemployment rises, the rise is likely accompanied by a large number of discouraged workers leaving the labor force. Hence, the increase in the unemployment rate is muted. The more people out of work, the greater the effect.

Take a look at the picture below. The bottom line is the unemployment rate as measured by the BLS. For the top line, I have added to the pool of unemployed persons the change, over the previous two years, in the number of persons out of the labor force. This swath is too broad, but without delving into microdata I cannot do systematically better.
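
A minimal sketch of the adjustment, with placeholder file and column names for the monthly household-survey aggregates:

```python
import pandas as pd

# Placeholder inputs: monthly counts (thousands) of unemployed persons, the
# civilian labor force, and persons not in the labor force.
cps = pd.read_csv("cps.csv", index_col="date", parse_dates=True)
unemployed = cps["unemployed"]
labor_force = cps["labor_force"]
nilf = cps["not_in_labor_force"]

# Change in "not in the labor force" over the previous 24 months, treated here
# as a (deliberately broad) proxy for discouraged workers.
discouraged = nilf - nilf.shift(24)

ur = 100 * unemployed / labor_force
aur = 100 * (unemployed + discouraged) / (labor_force + discouraged)
print(pd.DataFrame({"UR": ur, "AUR": aur}).tail(12))
```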

Notice, the Adjusted Unemployment Rate (AUR) is not always above the Unemployment Rate (UR). In 1997, when people were reentering the workforce to join the tech boom, the two rates coincided. Then in the 2000 recession there is a large divergence as discouraged workers leave the workforce.

Notice, these workers did not permanently leave the workforce. As soon as labor market conditions improved in 2004, they rejoined the workforce en masse (this can be seen in the narrowing gap between the two lines). And, by the summer of 2006, the two rates almost coincided once again. These workers should have been included in the unemployment rate. (By the way, I have long suspected, but never examined, that one of the reasons European unemployment rates are higher than in the U.S. is the differential treatment of out-of-the-labor-force individuals.)
Using the adjusted unemployment rate, we are already approaching 10 percent, a two-percentage-point gap over the unadjusted rate. When the reported unemployment rate reaches 10 percent and above, the gap may widen sharply, as it did during the 1970 recession, or it may narrow (as the UR rises faster), as it did in 1974. It is a horse race between a shrinking workforce (driven by discouraged workers) and rising unemployment. Either could win the race: the UR could move dramatically (1974) or level out at a relatively low level (1970, 2001).

Do not focus on the unemployment rate. If you want to know the state of the labor market, look either at initial claims for unemployment insurance or at the total number of jobs lost. Don’t look for a shortcut summary statistic.

Monday, March 2, 2009

GDII or Just Another Recession?

Do you remember when WWI was called the Great War or the War to End All Wars? Probably not unless you were of age before 1940, which makes you 90 something now, an unlikely demographic to be browsing the web. I know the terms thanks to Colonel Potter and his fond memories of the even-then outdated cavalry charge.

Most macroeconomists believe that the Great Depression was the depression to end all depressions. We learned a number of policy lessons during the Great Depression. With these lessons in hand, and our ever expanding understanding of the global economy, depressions were a thing of the past. Indeed, a large group of economists began to believe that severe recessions themselves were a thing of the past. They believed that monetary policy had become so sophisticated in managing expectations that the normal cyclical swings in economic activity could be almost completely avoided. In papers published as recently as January (this paper also contains a good overview of the moderation literature), economists continued this debate.

I don’t know all of the answers, but I do know that even if this recession ended today, even if the global economy returned to 3½ percent growth rates, the certitude over the efficacy of policy should be gone. With Canadian GDP data now in hand (-3.4 percent), the fall in fourth-quarter global GDP was a post-war record. This recession happened under the watch of the new and improved macroeconomic policy.

And as I have said before, there is a growing risk that we are now entering a period of global depression. That a depression is imminent is not, of course, certain: we may escape with a deep and prolonged recession.

But depressions are not that uncommon. In four episodes in the latter half of the 19th century, manufacturing output in the United States fell more than 20 percent. By my definition, four depressions. In the first half of the 20th century, manufacturing output met this definition six times, four if we don’t count 1938 as separate from 1930-32 and if we don’t include the drawdown following WWII. Between 1950 and 2008, there were exactly zero episodes. In their broadest interpretation, those papers I referenced above on the Great Moderation are really discussing this phenomenon, although they believe they are discussing the post-1980 world.
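
The counting exercise is mechanical once you pick a series. Here is a minimal sketch, with a placeholder annual output series, that flags every year in which output sits more than 20 percent below its prior peak and counts distinct episodes:

```python
import pandas as pd

# Placeholder input: an annual index of manufacturing (or industrial) output.
output = pd.read_csv("manufacturing_output.csv", index_col="year")["output"]

# Drawdown: how far the series sits below its running maximum.
drawdown = output / output.cummax() - 1

# Flag depression-sized drawdowns and count distinct episodes of such years.
deep = drawdown < -0.20
episode_starts = deep & ~deep.shift(fill_value=False)
print(f"Episodes with output more than 20 percent below its prior peak: {episode_starts.sum()}")
```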

Why do we believe depressions cannot happen any longer? Because they haven’t happened lately? Because economic policy can solve all of our problems? Because the world is different?

A lot of my concerns have been driven by the collapse of the Asian economies. Across Asia, industrial production has fallen more than 30 percent. The decline in production has been mirrored by falling trade volumes and is now showing through to domestic consumption. In most countries, production has now fallen farther than it did during the Asian Financial Crisis. But Asia is not falling alone. Every major economy (with the previously discussed exception of China) is experiencing a major fall in manufacturing IP. Countries as diverse as Brazil, Poland, and South Africa are all contracting.

Take a look at the following pictures:

Industrial production in the United States reached a peak in December 2007. In August 2008, the series began its nosedive: six months of not just falls but historically large falls. For comparison, look at the fall in IP during the 2000 recession. That recession was a manufacturing recession; recall that 20 percent of manufacturing employment disappeared forever at that time.
But the fall in production in the United States is small relative to the declines currently occurring in Asia. In Taiwan, production peaked in April 2008 and has since fallen 38 percent. Taiwan’s economy is completely dependent on its manufacturing sector.
Production in Europe has fallen by roughly the same amount as production in the United States, down just over 10 percent from the peak. The decline is not confined to countries like Ireland or Spain that had experienced fast growth in recent years; both Germany and France are seeing falling production levels.

Of course, with Europe in recession, Eastern Europe could not be expected to thrive. But many people had high hopes that Poland, with its relatively good external balance, might do well.
Likewise, the downturn in Brazil is sudden and startling. Brazil is a relatively closed economy. For a developing country, its growth was supported to a large extent by an expansion of domestic demand rather than by exports. Brazil’s IP peaked in September and has since fallen 20 percent.
Even South Africa has not escaped the downturn. South African IP peaked in June 2008 and has since fallen 14 percent.
The falls in production look like nothing less than a global collapse in manufacturing. Of course, manufacturing has become a smaller and smaller share of the economy over the past 30 years. So, a decline in manufacturing does not have to correspond to a decline in overall output. Wait! Yes it does, at least when the declines are this large.

Let’s think about the other sectors of the economy that might support growth. The services sector, in the advanced economies, is the first candidate. But which parts of services can grow? The financial sector is in disarray. Retail and wholesale services are about distributing manufactured goods. Transportation services (think shipping and railroads) are primarily designed to move goods and manufacturing inputs, especially coal and oil for energy-hungry manufacturing plants. Agricultural production should continue to grow; people still need to eat. But it could also shrink as people move to cheaper sources of food. I can’t think of any combination of sectors that can support growth in the absence of a manufacturing sector.

We shall see. Nobody thought there would be another world war after the Great War was over.

My Advice to Governments: Give up on fiscal stimulus. The stimulus is unlikely to help and will place country balance sheets in an untenable position if the economy continues to deteriorate. Instead, governments should look to the solvency of their social support programs.

For example, in the United States foreclosure is already a significant problem. House prices are likely to fall a lot further as the economy deteriorates over the next year. Low house prices combined with high unemployment will induce a large number of additional foreclosures. At some point, these foreclosures will cause a rise in the number of homeless families. The time to plan for this contingency is now, not when it actually happens.