Wednesday, December 2, 2009

Low Housing Inventories? Foreclosures continue to drive the market.

The inventory of new homes is near a record low.  Almost one year ago, I posted on the topic of housing inventories (here).  Since then, the inventory of new homes has fallen to the lower end of my estimates, dropping through my floor of 250 thousand, and the decline shows no sign of abating.  The pace of inventory drawdown has remained between 2.5 and 4.5 percent per month all year.  In every other recession, the end of the inventory cycle has been marked by a gentle U-shape.  If that pattern holds this time as well, the end of the housing correction is still far in the future. 


Why are inventories falling?  They are falling because the total inventory of housing effectively on the market is much larger than in a normal downturn.  Inventory is pushed up by the stock of foreclosed homes. 

Foreclosed homes directly compete with new homes, and they add to inventory in exactly the same manner.  A new home is empty and must be sold to maintain its value.  A foreclosed home is (generally) empty and must be sold to maintain its value.

In my view, the scale of the foreclosure problem is only beginning to show.  The following chart shows the inventory of new homes (the solid blue line).  The inventory has declined from just shy of 600 thousand homes in early 2006 to almost 200 thousand in October.  Foreclosures swamp these numbers.

Fannie and Freddie alone have a stock of foreclosed houses on their books of over 100 thousand.  The red diamond in the chart shows inventories adjusted for these homes.  This addition moves the stock of unsold homes above the long-term average of the series. 




But these homes are just the tip of the iceberg.  The current number of foreclosed houses is much larger than the number on the books of the two GSEs.  More importantly, another 11 million households are currently in a negative equity position.  If even 10 percent of these households go into foreclosure, the true level of inventories is over 1.4 million, an inventory stock 2.5 times the previous record high. 
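The arithmetic behind that 1.4 million figure can be laid out explicitly. In this sketch, the 10 percent foreclosure rate comes from the text, while the prior record inventory level is an assumed round number, not a published figure.

```python
# Back-of-the-envelope effective-inventory arithmetic (all counts in homes).
new_home_inventory = 200_000        # new homes for sale, October reading
gse_foreclosed = 100_000            # foreclosed homes held by Fannie and Freddie
negative_equity_households = 11_000_000
assumed_foreclosure_share = 0.10    # assumption: 1 in 10 under-water households defaults

effective_inventory = (new_home_inventory
                       + gse_foreclosed
                       + assumed_foreclosure_share * negative_equity_households)

prior_record = 570_000              # assumption: previous record new-home inventory (mid-2006)
ratio_to_record = effective_inventory / prior_record
```

Summing the three pieces gives roughly 1.4 million homes, about two and a half times the assumed prior record.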

Since the foreclosed inventory cannot be worked off through starts alone, the adjustment will have to come through prices.  House prices are going to have to fall further.  They have to get cheap enough to clear the market.  The fall in prices will put more households under water; some of these houses will go into foreclosure, keeping inventories elevated. 




Also of note, the median time on the market for new homes has continued to rise.  The graph below shows data from 1988 onward.  Two prior recessions are shown, and in neither did the time for sale even budge.  The median new home on the market has now been for sale for 13.5 months, an increase of 4.4 months in the last year.  Most likely, this rise reflects blocks of houses that cannot be sold, or at least cannot be sold at anything near the current price. 


Friday, November 20, 2009

America’s Lost Decade Already Happened

Once again, the United States faces the prospect of a jobless recovery.  Next month, the President is holding a conference on job creation.  The President seems genuinely distressed over the jobs situation.  He is seeking real answers.  He is asking the wrong questions. 

The President and his advisers view the job losses as a temporary cyclical problem, not as a structural problem with the U.S. economy.  Quick fixes for temporary downturns in the labor market are expensive but easy.  Last year (here, scroll about halfway down), I suggested a temporary social security tax holiday to boost employment and output.  Last week, Alan Blinder took up the call in an editorial in the Wall Street Journal.  At a cost of about $100,000 per job (I am not as optimistic as Blinder), the administration can create several million temporary jobs.  For far less money, the government could simply hire the same people at an average cost of less than $40,000. 

Ultimately all of these programs are doomed to failure.  Quick fixes cannot solve structural problems, and a structural problem exists.  Before a solution can be devised, we must understand the source of the problem.  As of right now, the source is unknown and finding the source probably requires more data than is readily available on the structure of the job losses.  But we can make a start with the data in hand. 

In this post, I break out some of the key data, documenting the sectors of decline and the sectors of growth.  In my next post, I will try to explain the loss.  I will not succeed but perhaps we can get closer to an answer. 

The Problem

Between January 2000 and October 2009, employment in the United States rose by a paltry 67,000.  Including the already-announced benchmark revision, the United States lost 757,000 jobs.  This loss was the first over a ten-year period since the Great Depression.  The previous post-war record low for ten-year job growth was 5.7 million, set in December 1962. 

The job gap (the difference between the growth of the working-age population and the growth in jobs) now exceeds 16 million.  The job gap in 1962 was 3.5 million jobs.  As a percent of employment, the current gap is 12 percent, the 1962 gap was 6 percent, and the maximum gap during the Great Depression was 19 percent.

And the job gap is not merely an artifact of the current downturn.  At the peak of the labor market in December 2007, the job gap had already grown to more than 7 million jobs.  The economy was producing enough jobs to eventually close the difference, lulling concern into complacency, but the gap was still large.  The current downturn has simply exacerbated, and made obvious, an already existing problem. 

The current gap will almost certainly still exist when my fifth-grade daughter enters the labor force.  If we created 181,000 jobs per month (the average monthly job growth in the 1990s), the gap would take more than 12 years to close, given current projections of the growth of the overall workforce.  Even if job growth could be sustained at the maximum annual pace observed between 1940 and 2009 (392 thousand jobs per month), the job gap would persist for more than five years. 
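The gap-closing arithmetic works by netting monthly job creation against ongoing growth in the working-age population. The sketch below illustrates the calculation; the workforce-growth figure is an assumption of mine, not the projection the post relies on, so the exact year counts are illustrative.

```python
# How long would it take to close a 16 million job gap?  Monthly job
# creation must outrun growth in the working-age population.  The growth
# figure below is an assumption, not a published projection.
job_gap = 16_000_000
workforce_growth_per_year = 1_600_000   # assumed working-age population growth

def years_to_close(jobs_per_month):
    net_jobs_per_year = 12 * jobs_per_month - workforce_growth_per_year
    return job_gap / net_jobs_per_year

years_1990s_pace = years_to_close(181_000)   # average 1990s pace
years_max_pace = years_to_close(392_000)     # best sustained pace, 1940-2009
```

Under these assumptions, the 1990s pace takes well over a decade to close the gap, and even the record pace takes more than five years.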

Aren’t the losses “just” manufacturing jobs?

Over the past decade, job losses have been concentrated in manufacturing.  The goods-producing sector shed more than 6.3 million jobs while the services sector added a roughly similar number.  In the middle of the last decade, many economists attributed the loss of manufacturing jobs to rapid productivity gains in the services sector, part of the long evolution of modern economies:  from agriculture to manufacturing, from manufacturing to services.  In this view, the job losses were nothing more than a continuation of a long-term trend. 



The basic fact is verifiable.  As a share of output or employment, the services sector has outpaced the manufacturing sector since the early 1950s.  A shift to capital-intensive technology in the manufacturing sector and rapid productivity gains in the services sector pushed and pulled workers.  The services sector was pulling labor out of the manufacturing sector, offering higher wages and easier work.  Between 1950 and 2000, the United States became an economic superpower, and no one can question the improvement in our standard of living.  We survived the manufacturing losses and became stronger because of them. 

But in the last decade, something changed:  manufacturing pushed workers out faster than the services sector could pull them in.  Note that even though the share of workers in manufacturing was falling, the absolute number was stable.  The level of manufacturing employment was approximately equal in 1950 and 2000. 

Job losses in manufacturing have been widespread.  Every major sub-category of manufacturing employment has declined.  Transportation, metals, computers, machinery, textiles, apparel, plastics, and printing have each lost more than 300,000 workers.  (I was amazed by the textiles figures.  When I went to NC State back in 1995, the textile industry was already rumored dead.  How can a dead industry still be shedding jobs 15 years later?)

Manufacturing is sick:  No surprise. 

Won’t Services save the day?

Maybe.  The services sector added almost 6.4 million jobs over the last decade.  We could simply remain in a period of rapid transformation.  It could be true, but the data stacks up against the hypothesis. 

While manufacturing losses are broad, services gains are narrow.  Three sectors account for more than 100 percent of the service-sector gains.  The following chart shows the change in employment by major sector between January 2000 and October 2009.  Health led the charge:  the health services industry added more than 3.5 million jobs.  Part of this gain was funded by an expansion of Medicaid, but the gains look good.  Health is a growth sector. 


Unfortunately, the rest of the story is not as rosy.  Government employment accounted for one-third of the gains, 1.8 million people.  Worse, already in 2000, the government was the largest service-sector employer.  Government jobs do not promote growth.  Government jobs must be funded from other sectors.  Government jobs seldom innovate.  


Are the losses simply a symptom of an emerging new economy?

In the 1990s, the innovative services sector pulled workers in, but it did not pull just any worker.  Young workers are more flexible and adapt to new industries more quickly than older workers.  They are more flexible because they have less to lose in leaving the old industries; they have not yet built a stock of industry-specific capital.  This dynamic led to an increase in participation for workers younger than 45 and a decrease for those who were older.

Over the past decade the trend has reversed and it has reversed to a shocking extent.  The following graph shows the participation rate of workers by age group.  In all groups younger than 55, the participation rate has fallen.  In all groups over 55, the participation rate has risen. 



In 2003, the social security retirement age increased by a year, and that change may explain a bit of the shift.  But it does not explain the increase in participation of the 70+ crowd.  Participation among those 75 and older has nearly doubled.  The increase is not driven by better health; ten years is too short a span to effect that change. 

No, the data indicates two things:  older workers feel poorer and are working longer, and young people can't get jobs.  (I am including everyone below 55 in the young category.  I will let you know when I need to change the definition again.)  I am surprised by the breakdown.  I had thought the picture would tilt in the other direction, at least for prime-age workers (25-45). 

Push and Pull

The manufacturing sector pushes workers out whenever its productivity growth is sufficiently high.  The symptom of this push is rising output with or without gains in employment.  Since the 1950s, manufacturing output has increased steadily despite stagnant employment.  Beginning in 2000, this stopped being true.  Even the surge between 2004 and 2006 was mild.  The maximum growth rate of manufacturing output between 2001 and 2009 was lower than its average growth rate between 1992 and 2000. 

Likewise, the services sector pulled workers in with high rates of productivity growth.  These high rates persisted over the last decade, but the average growth rate was a full percentage point lower than in the 1990s.  The services sector is not growing fast enough, on average, to offset the manufacturing losses. 

Takeaways

The more I look at the data, the more I am convinced that the current downturn and the downturn in 2000 are related episodes.  At least from the labor-market perspective, they seem closely connected.  Treating the current job losses in isolation is a mistake.  Whatever forces devastated the labor market in the early 2000s remain in effect today. 

The President wants to solve the “jobs problem”, of this I am certain.  We are lucky to have a President who genuinely wants to solve the problem.  But his advisers do not seem aware of the structural issues.  The first step is admitting the problem.  Economists, especially those named Christina Romer or Larry Summers, need to stop thinking of the job losses as an unavoidable bad outcome associated with all recessions.  This time really is different.

Monday, November 16, 2009

Resolving the Foreclosure Crisis: What Can We Do?

Almost two years have elapsed since the beginning of the recession, but the foreclosure crisis continues.  At the end of the second quarter, residential mortgages held by commercial banks reached a record default rate of nearly 9 percent, and according to recent news reports, this number continued to rise through the third quarter.  The high default rates are not due to subprime borrowers alone.  The majority of delinquencies are no longer subprime mortgages.  That honor now belongs to prime and near-prime mortgages. 
The crisis has continued despite innumerable programs, at both the state and federal level, designed to alleviate it.  The FHA has introduced a string of programs to help homeowners refinance into more affordable loans.  Together, various supervisory authorities have put pressure on banks and lending companies to modify the terms of residential mortgages—more than 1.5 million mortgages have been modified in the last year.  Several states have tried temporary foreclosure moratoriums.  In two indirect efforts to reduce foreclosures, the Federal Reserve purchased close to $800 billion in mortgage-backed securities and Congress passed an $8,000 first-time home-buyer tax credit. 
None of these programs has worked because none attacks the source of the problem.  Households owe more on their mortgages than their homes are worth.  They have economic incentives to default. 
Background
The mortgage crisis started because too many households borrowed too much and bought houses they could not afford.  Real money flowed into the housing market, and residential investment increased.  Because we could not build enough, especially in urban areas, the price of houses rose.  In this sense, we had a borrowing bubble, not a housing bubble.  While subprime mortgages are the poster children of the borrowing bubble, prime mortgages also flowed freely.
The borrowing bubble has burst. The money that flowed into the housing market is gone. There are now too many houses, and house prices have to fall.
This adjustment process is well under way. Housing starts have fallen off a cliff, and house prices are falling. Prices are anywhere between 5 and 40 percent below their peak depending on where you live and which measure you believe. They are going to be lower.
Falling prices are the root cause of mortgage foreclosures.  The more prices fall, the more households are under water (the value of the mortgage exceeds the value of the home).  Under-water households are more likely than any other class of borrower to default.  They may continue making payments for a time, but the incentive to make payments is diminished.  If their house price falls further or if they lose even a little income, they are likely to default.  From the household's perspective, this default can be optimal. 
The relationship between being under water and defaulting is simple and direct.  Think of a household that has lost its income.  Households that have sufficient equity in their homes will always, under this circumstance, choose to sell the home (even at a loss) rather than undergoing foreclosure.  By selling, these households profit financially and socially—they go forward with their credit unblemished.  Without sufficient equity, they do not have this option.  Without income they cannot make payments.  But, they also cannot sell because they owe more than the home is worth.  Default is the only option.
The Plan
Any successful foreclosure-reduction plan must address under-water households first.  Reducing a household’s monthly mortgage payment reduces the probability of default but only slightly (see this post).  For most under-water households, a lower interest rate or an extended term does not change the default incentives. 
My plan is simple:  a voluntary program to purchase all mortgages with a loan-to-value ratio greater than 90 percent and issue a new mortgage with a 90 percent LTV to the household.  To reduce moral hazard and capricious program take-up, issue an equity claim with the new mortgage.  Optimally, this claim amounts to 30 percent of the difference between the ultimate selling price and the origination value of the mortgage. 
The devil is in the details, but this plan would solve the foreclosure crisis. 
A Few Details
Why 90 percent?  This ratio gives homeowners an immediate financial stake in their property.  A household with 10 percent equity does not have a financial incentive to default.  Even after paying closing costs and paying the equity claim (see below), selling is more profitable than defaulting.  A few households will still default (households make mistakes), but a few defaults are not a problem.


Why issue an equity claim?  The equity claim reduces the redistributive aspects of the program and reduces moral hazard.  As with any government intervention in markets, this plan is a transfer between households.  Responsible households are subsidizing the houses of those who overborrowed.  The program also encourages future households to borrow more in the hope that they will receive a bailout if things go bad.  Issuing an equity claim reduces both distortions.




Why 30 percent?  The equity claim recovers 30 percent of the difference between the origination value of the new mortgage and the eventual selling price of the home.  With this percentage, the household stands to gain about 1 percent of the value of its home, after closing costs, at the time of origination.  That is, even at origination, the household has an incentive not to re-default.  If the claim were larger, households would remain under water, and the program would not succeed.  Moreover, with a 70 percent stake in their property, homeowners have an incentive to maintain their house.  


The equity claim, which is worth 3 percent of the home's value at origination, also reduces take-up of the program and gives households in the program an incentive to increase the declared value of their property, thereby taking on a larger mortgage.  No one who does not need this program will voluntarily forfeit 3 percent of their home's value.  
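To see how the 90 percent LTV and the 30 percent claim interact, consider a worked example for a hypothetical household that sells at the declared value immediately after origination. The home value and the 6 percent closing-cost rate are assumptions chosen for illustration.

```python
# Hypothetical household selling at the declared value right after origination.
# The home value and the 6 percent closing-cost rate are assumptions.
home_value = 200_000.0
new_mortgage = 0.90 * home_value                    # new mortgage at 90 percent LTV
equity_claim = 0.30 * (home_value - new_mortgage)   # 30 percent of sale price minus
                                                    # the mortgage origination value
closing_costs = 0.06 * home_value

net_to_household = home_value - new_mortgage - equity_claim - closing_costs
```

The claim comes to 3 percent of the home's value ($6,000 here), and the household still walks away with about 1 percent ($2,000), which is the incentive not to re-default.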


How do we determine home values?  The most difficult part of this plan is determining the house value.  But this is a macro program.  Macroeconomic policy is designed to care for the health of the economy, not of individuals; the program only has to be correct on average.  The current value of the home can be estimated from the price of the home when it was last sold, combined with the average change in house prices for the metropolitan area as measured by any good house price index.  The program should permit the household to increase, but not decrease, the current declared value.


Because no price index is perfect, especially when volumes are low, the government should reevaluate home values periodically, say quarterly.  If homeowners cannot sell their houses at the declared value, the declared value is too high and must be lowered.  If take up rates are high and homes are selling easily, the value may be too low.

Who would own the mortgages?  The mortgages would initially be held by the government.  But the government does not have to hold these notes.  Since the mortgages are now above water, and since the government is committed to repeating the plan if values should fall, the mortgages could be sold to private investors.  If the plan is followed in its entirety, the mortgages would not even need a government guarantee. 

How much would the plan cost?  The cost of the program depends on take-up rates, but it need not be expensive.  Outstanding mortgage debt increased by almost $4 trillion between 2005Q1 and 2008Q2.  The average LTV for these mortgages was far less than 80 percent.  Most of these households remain above water; indeed, the best guess is that 3 out of 4 do.  With a 100 percent take-up rate among under-water households, the cost of the program would be roughly $100 billion.  Most likely, the final bill would fall between $50 and $100 billion.
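The cost estimate can be reconstructed in two steps: take the share of the recent mortgage debt that is under water, then apply the write-down needed to reach a 90 percent LTV. The average write-down rate below is an assumption consistent with the $100 billion figure, not a number given in the text.

```python
# Rough program-cost arithmetic.  The 10 percent average write-down needed
# to bring an under-water mortgage to 90 percent LTV is an assumption.
recent_mortgage_debt = 4e12       # increase in mortgage debt, 2005Q1 to 2008Q2
underwater_share = 0.25           # "3 out of 4 remain above water"
average_writedown = 0.10          # assumed write-down per under-water dollar

underwater_debt = underwater_share * recent_mortgage_debt
full_takeup_cost = average_writedown * underwater_debt
```

Under these assumptions, full take-up among under-water households costs about $100 billion; any take-up short of 100 percent lands below that.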

Takeaways
This is the simplest plan for resolving the foreclosure crisis.  It requires little private information, and the government does not need to make an affordability determination.  The household's income does not matter.  If households can afford their payments, they will make them.  If not, they will sell, making a small profit. 

The plan has the added advantage of minimizing forward-looking housing market distortions.  Households can freely sell their house, and housing market adjustment is not hindered by negative equity households.  Labor market adjustment can also proceed.  An under-water manufacturing worker in Michigan is no longer forced to look for work in his local labor market alone.  He is free to sell his house and conduct a national search. 

My plan is scalable and can easily incorporate the good elements of other plans.  For example, with job losses still rising, too many houses could flood the market in some areas, hindering adjustment.  In this case, a temporary payment holiday for high-LTV households, or even an outright foreclosure moratorium, could be combined with the mortgage purchase plan, effectively metering the flow of houses onto the market.  

No matter the bells and whistles, a successful program must ensure that households remain above water.

Households must have a stake in their property or they will default—at some point.

Wednesday, November 11, 2009

Separations and Hires: Has the Recovery Stalled?

This post revises and updates my views from this post last March. 

The JOLTS data (find the data here) produced by the BLS gives the best insight into the current state of the job market. As Robert Shimer, a professor at the University of Chicago, showed some time ago, unemployment can rise either because workers become more likely to lose their jobs (the separation rate) or because unemployed workers have a more difficult time finding new jobs (the hires or matching rate). The BLS only began collecting these data in late 2000, much too late for us to compare the current downturn with previous episodes. Bob Shimer, however, has computed separation and matching rates going back to 1947 (his data is here). The data is not strictly comparable, but I think we can take the lessons from Shimer's data and apply them to the current episode.

I have spent a lot of time working with his data lately. The cyclical behavior of matching and separation rates is remarkable and should provide the key to the next level of understanding in business cycle research.  The more I work with this data the more I feel like I am beginning to understand consumer behavior during recessions. 

Matching rates, the probability of finding a job conditional on unemployment, begin to fall well before recessions begin and continue to fall well after the recession ends. Separation rates tend to rise at the beginning of recessions and tend to fall well before the end of the recession. Not surprisingly, the worst recessions in the post-war era (1958, 1982) are characterized by large changes in both rates.
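The interaction of the two rates can be summarized with the textbook labor-market flow model that underlies Shimer's decomposition: unemployment rises through separations and falls through matches, so the steady-state unemployment rate is s/(s+f). The rates below are illustrative round numbers, not measured values.

```python
# Standard flow model of unemployment: du/dt = s*(1 - u) - f*u.
# In steady state, u* = s / (s + f).  Illustrative monthly rates only.
def steady_state_unemployment(s, f):
    """s: separation rate, f: job-finding (matching) rate, both per month."""
    return s / (s + f)

u_normal = steady_state_unemployment(s=0.02, f=0.45)   # calm labor market
u_slump = steady_state_unemployment(s=0.03, f=0.25)    # higher s, lower f
```

The formula makes the point in the text concrete: a downturn that both raises the separation rate and lowers the matching rate pushes the steady-state unemployment rate up from both directions at once.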

In every post-war recession, the separation rate returned to more-or-less its long-term average 4 to 6 months before the trough. The fall in separation rates also coincides with a rise in consumption. Apparently, consumption begins to rise once employed households no longer fear unemployment, a rational outcome. Consumption rises before unemployment falls.  Unemployed workers continue to have trouble finding work long after the recession ends, but their consumption is small and stable.  It is the consumption of employed workers that rises. 

As a result of this research, I am beginning to have more faith in the signals emitted by the JOLTS data. First, take a look at the picture below. It shows the number of hires each month in the JOLTS data from late 2000 to January 2009. Amazingly, the number of hires began to fall as early as January 2006, the same month the housing market turned sour.  This data is consistent with the duration of unemployment calculated from the household survey.  The average duration of unemployment is now at a record high, implying a record-low probability of finding a job conditional on unemployment. 




This is the clearest piece of data I have yet come across indicating that the collapse of the housing market was not a random event. The decline in hire rates reduces the permanent income of households. People realize that, conditional on losing their job, new work will be harder to find. Households also seem to know that this series tends to have long cyclical swings; a decline today is likely to signal a long period of increasingly lower matching rates.

Of course, I want to know if the recession is over or, if it has yet to end, when it is likely to end.  Take a careful look at the very end of the hires graph.  Hires spiked upward in July but have since fallen back.  Granted, the fallback is only two months' worth of data, but it is consistent with a labor market that tried to improve and then suffered a setback.  This is consistent with the employment data (discussed here), and it is consistent with the picture from the separation rate. 

As I showed in March, the total number of separations has been falling steadily since early 2007. This data alone would indicate that flows into unemployment should be falling, quite the opposite of our experience over this period.  Again, note the July bobble in separations.



To understand the labor market, we must control for voluntary versus involuntary separations. If I quit my job today, knowing I had a new job in the bag, I would show up first as a separation and then as a hire.  We care only about involuntary separations. To get a better picture, subtract the number of monthly quits from total separations.  The resulting picture, shown below, gives a completely different view of the state of the labor market.
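The adjustment described above is a simple subtraction, sketched below. The monthly figures here are illustrative placeholders, not actual JOLTS readings.

```python
# Involuntary separations backed out of JOLTS-style totals: subtract quits
# (voluntary separations) from total separations.  Illustrative numbers only.
total_separations = [4.6e6, 4.5e6, 4.4e6]   # monthly total separations
quits = [1.8e6, 1.7e6, 1.7e6]               # monthly quits (voluntary)

involuntary = [s - q for s, q in zip(total_separations, quits)]
```

Because quits tend to fall in bad times while layoffs rise, the involuntary series can tell a very different story than total separations, which is exactly the point of the chart below.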

The level of separations in January 2009 was 35 percent higher than its 2001-07 average.  Keeping in mind that half of that period covered bad labor markets, this statistic is quite stunning.  The labor market has improved since January.  However, the recovery seems to have stalled:  over the past four or five months, the number of involuntary separations has plateaued 17 percent above its pre-recession average. 

This plateau also indicates a stalled recovery.  While we do not have a sufficiently long time series to know the behavior of this series in previous recessions, Shimer's separation rates fall sharply before the end of recessions and remain low thereafter.  The high level of involuntary separations is not consistent with recovery.  This data gives the same signal as the initial claims data.  Initial claims are down sharply from their peak but remain extremely high compared with their historic average. 

Casey Mulligan, a Chicago economist, notes in his blog (and more recently here) that consumer spending is rising, as is disposable income, even as the job market continues to deteriorate.  In particular, he has been keen on noting the ongoing increases in personal income.  Surely he realizes that personal income includes transfers (at record highs) from the government.  I don't think Casey Mulligan really believes that transfers accompanied by an increase in debt are an actual increase in income. 

Nonetheless, even as current income continues to rise, high separation and low matching rates have sharply reduced households' permanent income; they face an ongoing high probability of job loss and amazingly low odds of finding a new job if they become unemployed.  And labor income is far and away the largest portion of permanent income for the vast majority of Americans.

Sunday, November 8, 2009

The Unemployment Rate: Moderation through Participation

Here is a long-form answer to NorthGG’s recent comment.
In October, the unemployment rate breached double digits for the first time since 1983. This number, 10.2 percent, seems bad: one out of every ten workers in the labor force is out of work. But the number is deceptively benign. In this recession, more than at any other time since the early 1970s, declines in labor-market participation are moderating the unemployment rate.

The unemployment rate including all workers who have left the labor force in the last year is currently about two percentage points higher than the official rate. That is, the drop in participation is currently contributing about 2 percentage points to the unemployment rate. Under this measure, the unemployment rate is currently at a record high.
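The adjustment works by putting recent labor-force leavers back into both the unemployed count and the labor force. The counts below are rough approximations of the fall 2009 figures, used only to illustrate the calculation, not the precise numbers behind the two-point estimate.

```python
# Participation-adjusted unemployment rate: count workers who left the labor
# force in the last year as unemployed.  Counts in millions, approximate.
unemployed = 15.7
labor_force = 153.9
recent_leavers = 3.6          # assumed labor-force leavers over the past year

official_rate = unemployed / labor_force
adjusted_rate = (unemployed + recent_leavers) / (labor_force + recent_leavers)
gap_pp = 100 * (adjusted_rate - official_rate)   # contribution of lost participation
```

With these rough inputs, the adjusted rate runs about two percentage points above the official rate, matching the estimate in the text.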

The contribution from flows out of the labor force is about the same as in the early 1970s, but there is an important difference in the current situation, and the difference is critical for the long-term outlook for the U.S. economy. In the early 1970s, the baby boomers were just entering the workforce. The drop in participation came as boomers left the labor market to remain in school. They went to school both for economic reasons and to avoid the draft. This educated pool of workers has been a boon for the U.S. economy and is likely, in part, responsible for the emergence of the U.S. as an economic superpower.

Now, though, the decline in participation is not, for the most part, being driven by the young. It is being driven by older workers. The largest contribution to the decline in participation comes from workers between the ages of 45 and 55.

These workers are in their prime earning years. They do not leave the labor force lightly. I suspect that the majority of them held jobs that no longer exist. Many will have to find jobs in new industries. A lot of their industry-specific human capital has been destroyed. Almost certainly, at least for a time, the new job will pay less than the old one. Even when they eventually find work, they will be a drag on growth.

What do we do with the large mass of dislocated workers over the age of 45? They are too young, and too poor, to retire. I don’t know the answer but I have a feeling that without this answer the U.S. economy is not going to remain an economic superpower for long.

To formulate policy, we need data. We do not know why these workers are not working. We need to find out who these workers are. What jobs did they hold? Why are they no longer working? What skills do they have? What skills do they need? Are there similar workers in similar circumstance that have managed to stay working? Why did one group perform well and another poorly?

Once we have the answers to these questions, then and only then, can we begin to formulate a policy response. With funding, the BLS and the Census Bureau could answer these questions in a few months. There is no point in throwing money desperately at job-creation programs until we understand the source of the jobs problem.

Saturday, November 7, 2009

The Employment Situation: Bad News for a Recovery

The economy lost 190,000 jobs in October, a significant improvement from early this year, but losses remain near the peak of the last two recessions. More importantly, several pieces of the jobs report point to a possible deterioration in the labor market and are unambiguously bad for the economic outlook.

Although the labor market has improved substantially since early this year, job losses have stabilized around 200,000 over the past three months. We have never had an economic recovery with job losses at this level. I find it beyond belief that the economy is in the midst of a recovery with losses like these.

Most forecasters believe a jobless recovery has already begun. But a jobless recovery is characterized by a weak but stable labor market, a market where losses have ended but gains have yet to occur. An economy can, apparently, muddle along without job growth; it cannot grow with large job losses.

So, even at this level of losses, an economic recovery is not in the cards. But, more worrisome, other indicators of the labor market are much weaker than the establishment survey and some of these indicators point to accelerating losses.

Initial Claims Remain Weak

I wrote almost a year ago on the strong long-term link between initial claims per month and job losses per month. At the time, claims were accelerating sharply and were pointing to unheard of job losses. Initial claims have, of course, improved. But they have not fallen quickly or robustly. Initial claims seem persistently stuck above 500,000 per week.

These initial claims are consistent with job losses between 300,000 and 450,000 per month.

The Household Survey is a Disaster

Once again, job losses as measured by the household survey are outpacing job losses from the establishment survey by a substantial number. From February to June, the household survey and the establishment survey were, on a twelve-month change basis, showing essentially the same job losses. However, since mid-summer, the household survey has pulled ahead by 878,000, about 292,000 extra losses a month.

As I wrote (here), the month-to-month changes in household survey employment are not a reliable indicator of labor market conditions. The survey is subject to substantial sampling variation and month-to-month changes can be absurdly misleading. I have found, however, that whenever the household survey jumps ahead of the establishment survey, in either direction, it tends to predict both the direction of the labor market and the sign of future revisions to the establishment survey.

The household survey, consistent with the claims data, is pointing to a much weaker labor market than is the establishment survey.

A Recession Dummy in the Birth/Death Model?

I don’t know why the establishment survey is so much stronger than other labor-market indicators. But, I do have a suspicion. A long time ago, I wrote about the Birth/Death model used by the BLS to adjust the establishment data. Essentially, the BLS uses this model to control for the number of businesses being created and destroyed every month. I suspect, but do not know, that the BLS uses a recession dummy in the model. The recession dummy, if it exists, is important and likely substantially improves the performance of the model.

I suspect the BLS turned off the recession dummy in the third quarter. Without the recession dummy in place, the model will, for any given read of the source data, produce fewer job losses. Remember, I do not know that they use one. But, if I were using a model to estimate losses, I would include one. So, I suspect that they have one.

Takeaways

We cannot have a recovery, jobless or otherwise, if the economy continues to shed jobs at a rate of 200,000+ per month. The best indicators do not at the moment point to any further near-term improvement in the labor market. Until we see some substantial improvement in the labor market (at a minimum, zero net losses), the economy cannot recover.

Saturday, October 31, 2009

Doing the Math: The Fiscal Multiplier Effect of the 2009 American Recovery and Reinvestment Act

Fiscal stimulus is everything I always dreamed it would be.

Any regular reader of this blog knows my opinion of fiscal stimulus and fiscal multipliers. I have shown evidence (here and here) that fiscal multipliers must be below 1 and are likely closer to zero or even negative. Pushing against this belief is the recent performance of the economy. GDP grew by a very healthy 3.5 percent in the third quarter, boosted by gains in private consumption, residential investment, and direct government expenditures. According to the Vice President, the increase in GDP is entirely attributable to the stimulus efforts by the administration.

I am inclined to agree.

I believe that in the absence of government stimulus the U.S. economy would have continued to contract in the third quarter. What’s more, I say this without changing my views on the multiplier. How is that possible? Let’s do some math.

The following table shows the GDP growth that would have occurred in the absence of fiscal stimulus under different assumptions for what counts as government stimulus using a multiplier of 1 (my maximum) and a multiplier of 3 (Romer’s base case). Because there is considerable uncertainty over the timing of the stimulus, I show the four-quarter change in GDP through the third quarter. Over this time period, GDP fell by 2.3 percent. The numbers in the table show the four-quarter change without stimulus and should all be viewed relative to the 2.3 percent fall.

The first row of the table assumes that the sum total of fiscal stimulus is the pay out from the ARRA. According to data from Recovery.gov, as of October 29, the government had actually spent $173 billion (this number includes tax relief and spending). This is the most conservative estimate of stimulus spent to date. (Romer would include both actual spending and money allocated ($310 billion). I agree with her but want to use the smallest number to start. My numbers get bigger fast anyway.) Assuming a multiplier of 1 ($173 billion spent adds $173 billion to GDP), counterfactual GDP growth is -3.6 percent. With a multiplier of 3, the counterfactual falls to -6.2 percent.

I find both of these numbers credible.

I actually believe GDP would have fallen more than 6 percent in the absence of the programs. And, if I believed that total stimulus was $173 billion, I would also have to join Romer in the multiplier-is-greater-than-3 world. Fortunately for me (I am not the introspective type), the total amount of stimulus is much greater than $173 billion.


There are lots of ways of accounting for government stimulus, but the most time-honored method (used by the IMF, the OECD, and the Federal Reserve) is to simply examine the change in the government balance. Over the last four quarters, publicly held debt rose by $1,732 billion. (Really, we should only count the increase in the growth rate; but over the previous four quarters, debt only increased by $280 billion.) This number is the correct measure of stimulus. It includes all Federal government spending and all implicit reductions in the tax burden. The number is still conservative because it does not include the increase in state borrowing, an additional $50 to $200 billion.

Using this number and a multiplier of 1 yields counterfactual GDP growth of -15.4 percent. With a multiplier of 3, the number falls to -41.6 percent. These numbers are beyond the pale. GDP never fell by more than 40 percent in the Great Depression. Using this number and my belief of a counterfactual fall in output of 10 percent, the current multiplier is negative.

But, there is also the issue of the Fed’s balance sheet. The Fed has pumped almost $1,000 billion into the economy over the same period. This is measured by the expansion of the Fed’s balance sheet. There is no difference between receiving a $1 trillion tax break and receiving $1 trillion in cash from the Fed. We can argue about effectiveness, but that is not the exercise here.

Adding the Fed’s balance sheet expansion to the calculation yields counterfactual GDP growth of -22.4 percent. With a multiplier of 3, we have the absurd number of -62.5 percent. I believe the latter number would be a modern-era record. I suspect we would have to go back to the plague years in Europe to find an equivalent fall. I would love to see Summers or Romer stand up and make the case for this counterfactual.
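For readers who want to check the table, the arithmetic is a one-liner. The GDP base of roughly $13,200 billion and the $920 billion round number for the Fed's balance sheet expansion are my assumptions; the stimulus totals and the -2.3 percent actual four-quarter change come from the text.

```python
# Counterfactual four-quarter GDP growth absent stimulus: a sketch.
# Assumptions (mine): nominal GDP base of ~$13,200 billion and a Fed
# balance sheet expansion of ~$920 billion. The stimulus totals and
# the -2.3 percent actual change are from the post.

GDP_BASE = 13_200      # billions of dollars, assumed
ACTUAL_GROWTH = -2.3   # percent, four-quarter change through 2009 Q3

def counterfactual_growth(stimulus_billions: float, multiplier: float) -> float:
    """Growth that would have occurred had the stimulus not been spent."""
    boost = multiplier * stimulus_billions / GDP_BASE * 100  # pct points
    return ACTUAL_GROWTH - boost

scenarios = [("ARRA outlays only", 173),
             ("Change in public debt", 1_732),
             ("Debt plus Fed balance sheet", 1_732 + 920)]

for label, stimulus in scenarios:
    for m in (1, 3):
        print(f"{label}, multiplier {m}: "
              f"{counterfactual_growth(stimulus, m):+.1f}%")
```

The printed counterfactuals reproduce the table's figures to within a tenth or two of rounding.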

So like many Banana Republics before us, we have managed to spend enough to turn GDP growth positive. But, the cost of achieving this number has been phenomenal. To achieve a paltry $212 billion increase in real GDP, we spent about $2 trillion. This gives us a net return of roughly 10 cents on the dollar.

Yes, GDP would have fallen without the spending. The probability that this recession would have scored as a depression in the absence of stimulus is high.

Was it worth it?

Housing Tax Credit: Did it boost the housing market?

The $8,000 first-time home buyer tax credit seems to have boosted the sales of housing. The following graphic gives my estimates of the total impact on the value of homes sold since the passage of the bill in January.

Including both new and existing homes and valuing the sales at their median price, total housing sales increased by about $16 billion through September. It is, of course, very difficult to measure how much of the increase is attributable to a natural increase in demand and how much is attributable to an increase in demand derived from the tax credit alone. I estimate two distinct effects: the direct effect of the extra money pouring into the housing market and the indirect effect from the induced change in prices.
In normal times, according to the National Association of Realtors, about 40 percent of sales go to first-time home buyers. The tax credit likely increased this number. I don’t know by how much, but I assume 50 percent is a conservative number. Under this assumption, total outlays under the tax credit were $14.1 billion through the end of September, slightly more than the CBO’s scoring of the program ($11 billion) and slightly less than the NAR’s estimate ($15.2 billion).
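The $14.1 billion outlay figure is a simple back-of-the-envelope product. The roughly 3.5 million new plus existing homes sold January through September is my assumption; the 50 percent first-time share and the $8,000 credit are from above.

```python
# Back-of-envelope for total first-time-buyer credit outlays.
# Assumption (mine): about 3.5 million new plus existing homes sold
# January-September. The 50 percent first-time share and the $8,000
# credit are from the post.

total_sales = 3_525_000    # homes sold Jan-Sep, assumed
first_time_share = 0.50    # share going to first-time buyers
credit = 8_000             # dollars per eligible buyer

outlays = total_sales * first_time_share * credit
print(f"Estimated outlays: ${outlays / 1e9:.1f} billion")
```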

I assume that the $14.1 billion has a direct one-for-one impact on the demand for housing. This impact is shown in the figure as the difference between the solid black and the dashed black lines. The estimated impact has grown through the year. The sharp rise between April and June reflects ordinary seasonal fluctuations in demand. These data are not seasonally adjusted.

In addition to the direct effect, the subsidy has an indirect effect through induced price changes. Given an estimated price elasticity of demand, the subsidy pushed average house prices up by about 4 percent. (My estimate is just below that of Goldman Sachs, which estimated a little more than 5 percent; they must have assumed a slightly higher demand elasticity.) This price effect induces sales of existing homes. Because of the higher price, existing home owners have an incentive to sell their house and either rent (less likely) or buy a new home (more likely). This impact is small relative to the direct effect and is shown as the difference between the dashed red and dashed black lines.

In total, I estimate that the existence of the subsidy has boosted the value of housing sales between January and September by about $17.3 billion.

There is no question that this tax break helped the housing market. An extra $17 billion likely kept some home builders in business, and it must have paid the bills of quite a few real estate agents. Whether or not the program was worthwhile depends on the public policy decision of whether or not we want to subsidize the housing market. There are no macro side effects of the program as designed.

Whether or not the tax credit is extended (and it looks like this is a done deal), the housing market is likely to decline in the coming months. Because the program was expected to expire this month, most households eligible for the program and able to buy a house have already taken advantage of the tax credit—see the 9 percent surge in existing home sales in September. The majority of these households would have bought a house sometime in the next year without the subsidy and the tax credit simply changed the timing of their decision. With the extension, I expect the value of sales to fall by about $9 billion over the next several months and to fall by the remainder as the credit is phased out.

Of course, Congress, looking ahead to the midterm elections, hopes that underlying demand for housing will surge by the time the credit expires, disguising the tax-credit-induced slump. It is possible, but we are going to have to see a more robust labor-market recovery before this can happen.

Thursday, October 15, 2009

Estimated Taylor Rules: A Good Fit for Monetary Policy?

Several economists have recently used the Taylor rule to comment on the appropriateness of monetary policy (Rudebusch, Calculated Risk, Krugman, Altig). In some form or another, they regress the Fed Funds rate on a constant, the lagged policy rate, lagged inflation, and some measure of economic slack. They find that the fit of this equation over the last 20 years has been quite good, and that therefore, monetary policy has been appropriate.

In fact, the fit of these lines is so good that I have become a bit suspicious of the exercise. I decided to do two things: 1) extend the forecast back to the early 1950s and 2) drop any measure of economic slack.

The reason for the first is obvious: The Fed seems to have made systematic policy mistakes during the 1970s after performing admirably in the 1960s. Then, if the Taylor rule is to be a decent guide as to the appropriateness of policy, it had better deviate systematically in the 70s.

The reason for the second is both econometric and philosophical. If the lagged policy rate responds to economic slack and slack evolves only slowly, the system is over-identified: we are more or less estimating an identity. Philosophically, economic slack is impossible to measure. We have no idea how to measure it or how this measure would relate to inflation. (I know: there are a thousand ways to measure slack. I am just saying none of them are meaningful.)

The result of the exercise is shown below. I too was stunned.

The estimated Fed funds rate lies almost exactly on top of the effective funds rate, shown as a dashed line. Either this is a meaningless measure of policy or monetary policy has been equally appropriate from 1956 through 2009. I choose the former as the more reasonable explanation.
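My suspicion that the fit is mechanical is easy to demonstrate on made-up data. The sketch below simulates a policy rate that merely drifts slowly toward a simple inflation target and then "estimates" a Taylor rule on the result; the near-perfect fit comes from the lagged rate, not from any deep policy content. Every parameter here is invented for illustration.

```python
import numpy as np

# Simulate a persistent inflation series and a policy rate that
# partially adjusts toward 2 + 1.5*inflation. All parameters invented.
rng = np.random.default_rng(42)
T = 600
inf = np.zeros(T)   # inflation, AR(1) by assumption
r = np.zeros(T)     # policy rate, partial adjustment by assumption

for t in range(1, T):
    inf[t] = 0.95 * inf[t - 1] + rng.normal(0, 0.2)
    r[t] = 0.9 * r[t - 1] + 0.1 * (2.0 + 1.5 * inf[t]) + rng.normal(0, 0.1)

# "Estimate" a Taylor rule: regress r_t on a constant, r_{t-1}, inflation
X = np.column_stack([np.ones(T - 1), r[:-1], inf[1:]])
y = r[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print(f"R-squared: {r2:.3f}  (fit driven almost entirely by the lagged rate)")
```

Any sufficiently persistent rate series will deliver this kind of fit, which is exactly why the fit tells us little about whether policy was appropriate.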

I am not saying monetary policy has been bad of late. I am saying that using a Taylor rule to evaluate policy is meaningless. (Not the theory behind the Taylor rule; just the practice of estimating such a rule.)

What Gives? The Taylor Rule is an Identity

The Taylor rule is really something of an economic identity rather than a model of economic decision making. In effect, estimating a Taylor rule is no different than estimating the national income identity and finding out that Y really is equal to the sum of its parts.

To see this, use the old quantity theory equation:

P = v(φ)*M/Y

Prices are a function of Money (M), the quantity of goods in the economy (Y), and perhaps some measure of the current state of the economy (v(φ)), where the variable φ simply stands in for the current state of everything not written explicitly.

This equation is old fashioned but is fundamentally an identity: more money chasing fewer goods leads to higher prices.

We can transform the identity by taking logs and then first difference the equation.

dln(P) = dln(v(φ)) + dln(M) – dln(Y)

Then rearrange the equation to put money on the left hand side:

dln(M) = dln(P) + dln(Y) – dln(v(φ))

This is the Taylor rule.

Wait! We started with an identity (of sorts) and we ended with a Taylor rule. Logic then insists that the Taylor rule is an identity.

Okay, I hear you. The Fed does not target the money supply; the Fed targets a nominal short-term interest rate and the Taylor rule uses a short-term interest rate not the supply of money.

To see the link, ask how the Fed enforces its nominal target. The Fed buys or sells bonds until short bonds are trading at the desired price. What does it use to buy and sell bonds? Money, of course.

So, the money supply is determined by the desired interest rate. The Fed, in effect, adjusts the money supply such that the short-term bond market is in equilibrium at the desired policy rate: M = f(r) and r = G(M), where G is the inverse function of f.

Then, we have finally derived the Taylor rule.

dln(f(r)) = dln(P) + dln(Y) – dln(v(φ))

Finally, it is a small step to transform the above equation to the more familiar:

r_t = α·r_(t-1) + β1·Inf_t + β2·(Growth Rate of Output)_t – β3·(Potential Something)_t

The coefficients in the equation can be estimated to minimize the information loss in the last transformation. The final term simply reinterprets our state variable. Recall, v(φ) was just some term reflecting the economic environment. In my specification, I set this coefficient to zero. In a standard specification, the restriction sets this to some measure of potential output. We don’t have to argue over signs or magnitudes because we may freely estimate the coefficients.

The Taylor rule fits because it is an identity.

The Quantity Theory

For those of you who don’t believe in money, or at least don’t believe in the link between inflation and money, take a look at the following picture. The graph shows the five-year change in M2 divided by output against the five-year cumulative change in the CPI. If M/Y rises rapidly, so do prices.

NorthGG asked whether the relationship holds for core or headline inflation. I would say: yes. The five-year changes should completely eliminate the influence of high-frequency changes in food or energy. Only prolonged increases in either food or energy prices will show through. And, prolonged increases are also known as inflation.

Monday, October 12, 2009

Is High Inflation Likely?

The Statement

In his blog, Krugman dismisses the possibility of inflation. He goes further and calls the Fed irresponsible for even considering the possibility. Krugman’s analysis is completely misleading about the prospects for inflation: he is using a backward-looking indicator that ignores changes in current policy. I too believe that high inflation outcomes are likely avoidable. But Krugman seems to believe high inflation will be avoided even if the Fed leaves rates at zero forever, whereas I believe that positive action (tighter policy) on the part of the Fed will be necessary to avoid inflation.

The Method

To see the difference in our beliefs, let’s work within Krugman’s framework. Krugman proposes the Taylor rule as his model of monetary policy and uses the parameters estimated by Glenn Rudebusch at the SF Fed to judge the current, appropriate policy stance. Krugman uses the following Taylor rule:

Target fed funds rate = 2.07 + 1.28 x inflation - 1.95 x excess unemployment

Rudebusch’s weight on the unemployment gap is much higher than most other estimates. Taylor called for a coefficient of 0.5 on excess unemployment relative to a coefficient of 1.0 on inflation, implying the Fed should care twice as much about inflation as unemployment. Krugman believes the Fed should care only about 2/3 as much about inflation as about the unemployment gap. The change in weights is quite significant.

In Krugman’s specification, with current inflation at -0.02 percent (core PCE, 4-quarter change), the first two terms imply a Fed funds rate of positive 2.1 percent. The unemployment gap is then driving the current negative policy rate. With current unemployment around 9.8 percent (it was only 9.3 percent in Q2) and using the CBO’s pre-recession estimate of the NAIRU, the implied policy rate is very, very negative; indeed, much more negative than Krugman reports, negative 7.7 percent. If we had used more standard weights and a constant of 2, the implied policy rate is just barely negative, -0.5 percent.
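The implied rates are easy to verify. The 4.8 percent NAIRU is my reading of the CBO's pre-recession estimate (so the gap is 9.8 - 4.8 = 5.0 points); the coefficients are from the text.

```python
# Implied policy rates under two Taylor rules, coefficients from the post.
# Assumption (mine): CBO pre-recession NAIRU of about 4.8 percent,
# so the unemployment gap is 9.8 - 4.8 = 5.0 points.

inflation = -0.02   # core PCE, 4-quarter change, percent
gap = 9.8 - 4.8     # excess unemployment, percentage points

# Rudebusch/Krugman specification
rudebusch = 2.07 + 1.28 * inflation - 1.95 * gap
# More standard Taylor weights with a constant of 2
standard = 2.0 + 1.5 * inflation - 0.5 * gap

print(f"Rudebusch rule: {rudebusch:+.1f} percent")
print(f"Standard weights: {standard:+.1f} percent")
```

With inflation essentially at zero, the gap term does all the work: the large 1.95 weight turns a 5-point gap into nearly 10 percentage points of implied easing.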

But, to be honest, I am with Krugman and don’t see any justification for Taylor’s weights. The Taylor rule is not a model of the policy rate; it was designed as a descriptive rule for understanding policy setting. Given this view, we are free to estimate the coefficients, and when I estimate the Taylor rule I arrive at coefficients closer to Glenn’s than to Taylor’s.

The Mistake

Krugman writes: “If you think the Taylor rule was a good guide to policy in the past, the Fed shouldn’t start to raise rates until the rule starts, you know, yielding a positive number.”

The first part of the quote is wrong and the second part puzzling.

The Taylor rule was not a good guide to policy in the past: The Taylor rule had been a good description of past policy. The two statements are not equivalent.

The Taylor rule is essentially a linearized equation from a specific model linking Fed policy, inflation, and resource slack. The Taylor rule does not describe a fundamental relationship between policy, output, and inflation. There are many models of output and inflation.

In particular, we can modify Krugman’s Taylor rule to make it forward looking. An optimally behaving Fed will set policy not based on the past behavior of variables but rather on their forward-looking expectations.

Money Matters
Krugman writes: “For some reason many Fed officials seem to view it as inherently unsound to stay at a zero rate for several years running — but I’m at a loss to understand what model, or even conceptual framework, leads them to that conclusion.”
Maybe some of the Fed officials remember the adage ardently espoused by Friedman: inflation is always and everywhere a monetary phenomenon.

Krugman, along with the rest of the Salt Water economists, seems to have completely forgotten about the link between money and inflation. Lucas, Friedman, Smith, Hume, and yes even Keynes believed in a link between the quantity of money and inflation. Lots of money: lots of inflation.

Every scholar who has ever seriously examined the relationship between inflation and money has found a positive relationship. Lucas found a positive relationship using both U.S. and international data. Friedman found the same in the 1960s for data sets running from the mid-1880s to the early 1960s. Both Hume and Smith believed in a positive relationship between money and inflation, although their use of data was more anecdotal and conjectural than rigorous.

The most recent study of inflation and money, of which I am aware, was presented last Thursday at a Federal Reserve conference. “Money and Inflation,” by Bennett McCallum and Edward Nelson, finds a consistent positive relationship between money supply and inflation: money raises inflation about 1-for-1 with a two-year lag.

The Fed has increased the money supply substantially over the last two years. According to the Federal Reserve’s H.6 release, M1 has increased 18.6 percent over the past twelve months, while the broader M2 has risen 7.8 percent. These are extremely high growth rates. According to the work of McCallum and Nelson, this growth rate will lead to inflation one to two years from now.

If we replace Krugman’s backward-looking PCE with a reasonable forward looking expectation of inflation driven by the increase in the money supply, we find that between one and two years from now the policy rate had better be above 2 percent. Further, if we believe the results of McCallum and Nelson, the policy rate probably needs to start increasing now. That is, the money supply has to be reduced now to avoid the high inflation in the future.
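Taken literally, the McCallum–Nelson pass-through gives a crude forward-looking inflation number: money growth net of real output growth, pushed forward one to two years. The 2.5 percent trend real growth is my assumption, and the zero-gap shortcut in the last step is a further, optimistic simplification of mine.

```python
# Crude forward inflation from money growth, reading the ~1-for-1
# pass-through with a two-year lag literally.
# Assumptions (mine): trend real output growth of ~2.5 percent and a
# closed unemployment gap at the forecast horizon.

m2_growth = 7.8      # percent, past 12 months, from the post
real_growth = 2.5    # percent, assumed trend

expected_inflation = m2_growth - real_growth   # percent, 1-2 years out

# Plug into the Rudebusch-style rule from the post with a zero gap
implied_rate = 2.07 + 1.28 * expected_inflation
print(f"Expected inflation: {expected_inflation:.1f} percent")
print(f"Implied policy rate at the horizon: {implied_rate:.1f} percent")
```

Even allowing for a sizable remaining unemployment gap at the horizon, an inflation term this large leaves the implied policy rate well above zero.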

This is the model policy makers likely have in mind when they call for tighter policy.

I don’t think there is any particular rush to raise rates. I think the Fed can afford to be patient and watch the data.

Conclusion

Krugman and I agree: There is very little likelihood of high inflation. Krugman believes in the Taylor rule and so a passive Fed can achieve this outcome. I believe in money and so an active Fed will achieve this outcome.

Krugman’s own Taylor rule framework implies high inflation within one to two years if the Fed is passive. Fortunately, the Fed is not passive. The Fed has the power to control inflation. It simply has to withdraw the liquidity in a timely manner.

Saturday, October 10, 2009

Gold Bugs, Exchange Rates, and Monetary Policy

Beware the dollar hawks, says Krugman in a recent post to his blog. The dollar is depreciating quickly, and many (Krugman’s “many”; I have not fact-checked this) are calling for somebody to do something about it. Krugman believes there is a danger in this call. And for once, I am in unambiguous agreement with him.

Using monetary policy to control the value of the dollar would be a policy mistake.

The Fed is already trying to do too much with a single policy tool. Adding yet another criterion to its already long list is all too likely to lead to policy mistakes. The Fed needs to keep its eyes on the balls already in the air and not add balls to impress like a foolish street juggler.

But, at the same time, the Fed should not disregard the exchange rate as a signal of overly loose policy. The exchange rate is perhaps the best, broadest, and most flexible dollar-denominated price. A depreciation of the dollar is inflation—the dollar price of foreign goods goes up. It reflects the average relative price of dollar goods. Amongst their other price signals, the Fed should monitor the exchange rate. The value of this mechanism as a price signal ultimately lies behind the current dispute amongst various members of the FOMC, no different from the debate four years ago on the value of the Cleveland Fed’s median inflation rate.

The danger of the Fed ignoring these signals is exceptionally high at the moment. There is a group of economists (Roubini talks about this all the time.) that have been looking for a decline in the real value of the dollar for a long time. They view the real value of the dollar as an equilibrating mechanism to adjust the pattern of global demand. If the Fed shares these beliefs, they may confuse a nominal movement in the dollar with the long-looked-for real depreciation.

Remember, any price has two components, real and nominal. The real component reflects an equilibrium between supply and demand (of and for the good). The nominal value reflects an equilibrium between the good and money. The Fed controls the latter (to an extent) and never the former.

To tie it all together, the Fed should be careful not to confuse the real and nominal value of the dollar. From their perspective, unexplained movements in the dollar are probably nominal.

Watch the value of the dollar but don’t target it.

Wednesday, September 30, 2009

The True Impact of Fiscal Stimulus

There are currently a number of economists crowing over the absolute success of fiscal policy. I am still unconvinced that fiscal policy has any meaningful impact on economic output. I continue to believe that the programs to date have merely masked the underlying weakness in the economy.

“Masked the underlying weakness – Isn’t that exactly the definition of fiscal stimulus?” you ask.

The answer is not necessarily. To explain my views here, let’s turn to the case of China, the current poster child for successful fiscal and monetary stimulus (and, I would argue, a crisis in the making). In the second quarter, GDP increased at a rate somewhere in the high teens. (China only issues real GDP on a four-quarter change basis; so, quarterly changes must be inferred.) The growth rate was phenomenal and is directly attributable to government intervention.

How did China achieve this remarkable growth? Easy. I believe that the government simply insisted that factories continue producing output—I am sure the insistence was accompanied by a promise of a fiscal transfer. Factories keep producing output, calamity averted, the stimulus is effective.

In fact, GDP gets an additional boost. The factories are producing output that nobody is buying – exports remained depressed and consumption is not picking up all of the slack. The output from the factories is accumulated as inventories, a positive contribution to GDP. But, since nobody is buying, the price of the inventories also falls. Real GDP gets an arbitrarily large boost as the price deflator declines. [I am exaggerating for effect. I know some of the goods were purchased, but the increase in output likely owes to exactly the channels I am suggesting. If you want to see this at work in the United States, go back to the fourth quarter of 2006. Autos made a large positive contribution to GDP as the prices of new cars fell sharply and the real value of car inventories was pushed up.]
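The deflator channel deserves a toy example. Every number below is invented to illustrate the mechanism, not an estimate for China.

```python
# Toy illustration of the deflator channel: unsold output piles up as
# inventories while its price falls, boosting measured *real* inventory
# investment. All numbers are invented.

nominal_inventory_flow = 100.0   # value of unsold goods added each period

deflator_t1 = 1.00               # price level when production starts
deflator_t2 = 0.80               # price level after a 20 percent price fall

real_flow_t1 = nominal_inventory_flow / deflator_t1   # ~100
real_flow_t2 = nominal_inventory_flow / deflator_t2   # ~125

# Same nominal stream of unsold goods, yet a 25 percent larger measured
# real contribution to GDP once prices fall.
print(f"Real inventory contribution: {real_flow_t1:.0f} -> {real_flow_t2:.0f}")
```

The nominal flow never changes; only the falling deflator inflates the measured real contribution.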

This is not real growth. It is not real growth in the present and it is a direct drag on growth in the future. Barring a sudden increase in OECD demand, China is in for some hard times.

This is the nature of fiscal stimulus.

The Fiscal Cost of Stimulus

Paul Krugman and I posted almost opposite comments on fiscal spending yesterday (his; mine). (Trust me, they are not dueling – Nobel Prize winning famous guy, me.) Whereas I was concerned about the long-term cost of higher government spending, he was blasé. From his article, “I’m not proposing a fiscal-stimulus Laffer curve here: it’s probably not true that spending money actually improves the government’s long-run fiscal position (although that’s certainly within the range of possibilities.)” He seems to believe that fiscal stimulus (at least under certain conditions) is almost self-financing.

I don’t believe it for a minute. But, Krugman’s views are amazingly internally consistent and there is no questioning his mental acuity.

He believes in large multipliers. That is, every dollar increase in government spending increases output by something much larger than a dollar. (He has publicly averred to multipliers of around 1.3, but this would not be even in the ballpark of self-financing. Spend $1 billion, get $1.3 billion of output; tax the extra $0.3 billion and raise roughly $100 million in extra revenue. With our tax system, self-financing begins with multipliers greater than 3.)

If Krugman is correct on the multipliers then he is correct on the cost of the fiscal stimulus. If not, …

Tuesday, September 29, 2009

The Federal Deficit and the Tax Burden: We can afford the debt, not the spending.

Over the past year, the amount of publicly held Treasuries, the current debt of the U.S. government, has ballooned. Debt held by the public has risen from 56 percent of GDP (already a near record) to over 63 percent of GDP in 2009. Importantly, this increase has occurred before most of the stimulus money from the American Recovery and Reinvestment Act has been spent; indeed, of that money, a meager $151 billion had been drawn as of the end of August. With this large debt come concerns over fiscal sustainability.

I have always taken it for granted that the debt was sustainable. After all, why would the markets lend to the government if the debt were not sustainable? Recent events (and Krugman’s excellent article in the NY Times on the state of macroeconomics—Summers’ ketchup economics is brilliant) have shaken my outright confidence in markets. So, I thought I should do my own sustainability calculations. I came to the surprising conclusion that the stock of government debt is not only sustainable but downright affordable.

To be conservative in my estimates, I use the entire amount of Treasuries outstanding. Publicly held debt is a better measure of government debt, but either will do for my purposes—funds held by the Fed and other public agencies don’t really count. In my mind, counting them is the same as counting the total sum of Treasuries that could be issued.

To check for sustainability, I take the Federal Government in hand and put it on a debt payment plan. If the Feds were a household and we were financial managers we might choose a 10 to 30 year plan depending on their age and circumstance. But, governments are special. I chose to put the gov’t on a 100-year payment plan at the end of which time debt must be zero. Of course with the payment plan the government is forbidden to acquire new debt—this feature will be the lemon in the pudding.

If real interest rates stay at their current low levels of around 1 percent (unfathomable), the payment plan costs each of the 137 million workers in the United States $1,361 per year, a paltry 1.7 percent of personal income. Even if real interest rates rise as high as 10 percent (equally unfathomable (or maybe I can picture this one)), the debt burden per year remains an affordable $8,248 per year per worker or about 10 percent of personal income. The latter number is large but feasible.
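The payment plan is just an amortizing annuity. The $11.8 trillion stock of Treasuries is my round number; depending on the exact debt stock used, the per-worker figures land close to, though not exactly on, the numbers above.

```python
# Level annual payment that retires the federal debt over 100 years.
# Assumptions (mine): ~$11.8 trillion of Treasuries outstanding and
# 137 million workers.

DEBT = 11.8e12     # dollars, assumed stock of Treasuries
WORKERS = 137e6
YEARS = 100

def annual_payment(debt: float, real_rate: float, years: int) -> float:
    """Standard amortizing-annuity payment retiring `debt` over `years`."""
    if real_rate == 0:
        return debt / years
    return debt * real_rate / (1 - (1 + real_rate) ** -years)

for rate in (0.01, 0.10):
    per_worker = annual_payment(DEBT, rate, YEARS) / WORKERS
    print(f"real rate {rate:.0%}: ${per_worker:,.0f} per worker per year")
```

Note how little the rate matters at the low end: at 1 percent, a 100-year payoff is barely above straight-line repayment of the principal.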

The real problem is that the government must also raise sufficient funds to keep from borrowing more. The Bush administration ran the largest deficits of any government up to that time. If we add Bush-sized deficits to the total, yearly payments need to rise. In the low interest rate case, payments climb to a little more than $3,000 per worker per year. At 10 percent, we quickly approach $10,000 per worker per year.

The Obama deficits are projected to be much, much larger. If we have to raise revenue to offset the average Obama deficit over his first term as calculated by the CBO, even at 1 percent interest, the debt payment is $6,326 per worker. At 10 percent interest, payments per worker increase to almost 17 percent of personal income.

This exercise reveals the national debt to be affordable in the ordinary sense of the word. The United States could pay off its debt in a mere 100 years with only a modest strain on workers. However, if government spending continues to rise (or stays at current levels), the strain on workers is likely to be extreme. Additional taxes in excess of 10 percent of total personal income are likely to have a large negative impact on the workforce.

I might be in favor of higher government spending or I might be against it. As always, the decision is one of public policy and not of economics. But good economics should inform the decision, and we should make an informed choice about how much to spend based on an honest national dialogue.

The Strength of the Recovery: Demographics could kill it all.

Mussa gave an interesting talk on the recovery the other day at the Peterson Institute; you can find his paper on their website. He makes an excellent case for a V-shaped recovery. With his forecast of U.S. growth at 4.5 percent for 2010, he claims that his estimate is actually conservative: in reality, we could, and likely should, look for an even steeper recovery. In particular, he points out that coming out of the average post-war recession, growth has been closer to 10 percent than to 5.

In my last two posts, I gave a cycle-by-cycle breakdown of the key components of GDP. The rate of recovery in individual components was not particularly strong. Adding up the various measures, past recoveries were strong because most of the rising demand coming out of recession was satisfied from domestic sources. In this recession, any rising demand is likely to be satisfied to a large extent by production outside the United States. Just as the downturn in the United States was cushioned by a fall in imports, the rebound will be damped by imports.

But, in my mind, this is all accounting. The potential U.S. recovery is exposed to a much more serious threat: the U.S. consumer. In percentage terms, the rise in the savings rate in this recession far exceeds the rise at any other time in post-war history; we have to go back to the 1930s to see a similar spike. Before this recession, the largest post-war increase in the savings rate was a little more than 30 percent. In this episode, the increase has been a little more than 80 percent. Granted, the calculation is from a very low base near 2 percent of income.
Moreover, never in post-war history has consumption accounted for such a large portion of aggregate demand. I believe the large change in savings reflects a combination of cyclical forces and the beginning of a long-term consumer retrenchment: Americans must consume a smaller portion of their income.
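The percent-versus-percentage-point distinction matters here, since the 80 percent rise is from a low base. A toy illustration, with the savings-rate levels assumed purely for the example rather than taken from the NIPA data:

```python
def pct_increase(old, new):
    """Percent increase from `old` to `new`."""
    return 100.0 * (new - old) / old

# Hypothetical levels for illustration only: a savings rate rising
# from about 2.1% of income to about 3.8%.
old_rate, new_rate = 2.1, 3.8
print(f"{pct_increase(old_rate, new_rate):.0f}% increase, "
      f"but only {new_rate - old_rate:.1f} percentage points")
```

A large percent increase can thus coexist with a small absolute shift in the share of income saved.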

With 1 in 6 Americans underemployed and with wage growth falling, consumption is likely to be a substantial drag on any recovery.

Yet, I believe Mussa would retort (he makes the argument in his paper) that consumption has fallen more than in any other recession and we are due a bounce back. Indeed, so Mussa would say, following the 1980 recession consumer spending bounced back with a vengeance.

Take a look at the following picture. Despite the financial-crisis nature of this recession, consumer credit has been unimpaired relative to previous recessions: real consumer credit has only nudged down. Compare this to the almost 15 percent fall in 1982. In 1982, real interest rates rose to record levels and consumers stopped spending. Following that recession, the expansion of credit played a huge role in the rapid expansion of spending; consumer credit grew at almost three times the pace of the average recovery. With almost no impairment this time, the room for a bounce back is smaller.
The resumption of consumption also faces a very unfavorable demographic situation. The chart below shows prime-age workers (ages 30 to 65) as a percent of the total population. In the early 1980s, this group increased almost 8 percent over the first five years of the decade. In the current recession, this percentage falls remarkably over the forecast horizon; indeed, it falls faster than at any other time since at least 1901. Prime-age workers are the most productive people in the economy. They also spend the most: they buy big houses and fancy cars. A fall in this group has widespread implications for the economy. The only other country I know of with this demographic pattern is 1990s Japan. That is not a good omen for V-shaped recoveries.

Just to finish the thought, here is the path of prime-age workers as a percent of the total population from 1970 through 2014. It seems that one source of our amazing growth rates from 1980 through 2007 was the rise of this group of workers. Notice that during the productivity slowdown of the 1970s, this group was flat or falling. During the 2001 recession and the long jobless recovery that followed, this group was also quite stagnant. For the next ten years or so, these workers will be retiring out of the labor force.

Most model-based forecasts assume that the fall in population itself will subtract only a few tenths from growth. With labor shares and capital constant, this is true. However, the fall in prime-age workers is also likely to be associated with a long-term downward trend in investment; we already have almost a decade of subpar investment growth. More importantly, the loss of human capital will be severe. It takes years to produce prime-age workers; they embody skills acquired over a lifetime of work. Their loss is likely to drag on labor productivity for the foreseeable future.
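The "few tenths" claim can be reproduced with back-of-the-envelope growth accounting. A sketch, where both the labor share and the pace of decline in the prime-age share are assumed values for illustration, not estimates from the data behind the charts:

```python
# With a Cobb-Douglas production function and capital per worker held
# constant, growth in output per capita is approximately
#   g(Y/N) ~ g(TFP) + labor_share * g(workers / population)
LABOR_SHARE = 0.7                 # assumed, a typical U.S. value
prime_age_share_growth = -0.005   # assumed: share falls 0.5% per year

drag = LABOR_SHARE * prime_age_share_growth
print(f"demographic drag on annual growth: {drag:.2%}")
```

Under these assumptions the direct drag is about 0.35 percentage points per year, consistent with "a few tenths"; the investment and human-capital channels above would come on top of this.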




Wednesday, September 9, 2009

Part I: The Great Recession of 2009: Where are we going?

Most analysts now believe the recession is either already over or at least breathing its last gasp. They might be right. But I remember having this conversation six months ago, back in March when the PMIs first turned up. The end of the recession wasn’t in the data then, and it is not obvious it is in the data now, although the odds have shifted toward recovery. The difference between now and then is time. We now have, I believe, sufficient data on hand to judge the severity of the downturn and to compare this downturn to previous recessions. The comparison may shed light on both the likelihood of a near-term recovery and the potential shape of that recovery.

We examine, in turn, key elements of U.S. demand, running from investment to imports to exports to consumption. For each series, we plot data 12 quarters on either side of the end point of post-war U.S. recessions. The zero date is the last quarter of the NBER recession. The solid line is the average behavior across all post-war recessions excluding 2009. The dotted line shows the behavior of the series in the worst post-war recession, defined as the recession with the largest peak-to-trough decline in the particular series being shown; all points on the dotted line belong to the same recession. The dashed line shows data for the current recession, plotted as if 2009Q2 is the end of the recession. All lines are indexed to 100 at the last date of the recession. The indexing scales the graphs to show cumulative percent changes from the zero date. For example, a value of 120 in period -4 indicates a fall of roughly 17 percent in the series over the last year of the recession (from 120 down to 100), while a value of 120 in period 4 indicates a 20 percent rise from the end date of the recession. All series are real.
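The indexing described above can be sketched in a few lines. The series and end date here are toy values for illustration, not the actual NIPA data behind the charts:

```python
def index_around_end(series, end_idx, window=12):
    """Window `series` from `window` quarters before to `window` quarters
    after the recession end date `end_idx`, indexed to 100 at that date."""
    base = series[end_idx]
    return [100.0 * x / base
            for x in series[end_idx - window : end_idx + window + 1]]

# Toy quarterly series: a steady fall into the end date, then recovery.
fall = list(range(124, 98, -2))   # 124, 122, ..., 100 (13 quarters)
rise = list(range(102, 126, 2))   # 102, 104, ..., 124 (12 quarters)
q = fall + rise

indexed = index_around_end(q, end_idx=12)
print(indexed[0], indexed[12])    # 124.0 at t-12, 100.0 at the zero date
```

Reading the indexed values directly as cumulative percent changes from the zero date is what makes the three lines in each chart comparable across recessions of very different scales.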

We begin with machinery and equipment investment. I have long argued that this recession is a manufacturing recession (see this post); therefore, in my view, the recession is unlikely to end without a recovery in this sector. Investment is one of the most volatile components of GDP, typically falling almost 10 percent over the last year of a recession. Post-recession, investment grows robustly, recovering in level terms one year after the end of the recession.

The worst investment recession was 1958. Investment fell a little more than 18 percent, with the decline occurring in a single year. Despite faster-than-average growth upon exit, investment did not recover its pre-recession peak until almost two years after the end of the recession, just in time for the 1960-61 recession.

The current recession has solidly replaced 1958 as the worst investment recession. Investment has, to date, fallen more than 20 percent from its peak six quarters ago. Investment does not currently show any signs of life; however, investment turns on a dime and typically turns up only once the recession is over. If history is a reliable guide, investment is likely to surge in coming quarters. I believe a typical recovery is more likely than the sluggish recovery experienced after the 2001 recession. A global manufacturing reshuffling seems once again afoot. Even though this reshuffling is likely to destroy capital (the shutting down of American car plants, for example), it may also breed investment.
Like investment, trade has plummeted more in this recession than at any time in U.S. post-war history. Unlike investment, however, trade shows a clear, leading, end-of-recession pattern: the decline in imports tends to slow in the quarter or two before the actual end of the recession. Following the recession, imports tend to grow very rapidly, in part reflecting the strong trend in trade over the last 40 years. The worst import recession was 1975.

In the current recession, imports have also surpassed all previous records, falling over 20 percent. As of the second quarter, imports were still declining, but the pace of decline was somewhat smaller than in previous quarters. This may signal the end of the recession, or it may just be one of the inevitable bobbles in the data. In the former case, the recession would end following the third quarter.
Of course, given the focus on investment above, capital imports may be even more important than overall imports. The largest capital-import recession was 2001, as the United States shed one-third of all manufacturing jobs and the tech expansion dissipated. The current recession has exceeded this high mark. Capital imports lead the cycle more strongly than overall imports: in past recessions, capital imports stopped falling entirely, on average, one to two quarters before the end date. Currently, capital imports have not yet truly flattened.

However, while NIPA capital imports had not stabilized as of the second quarter, capital imports have since leveled out. The following figure shows real capital imports for Japan, Canada, the Euro Area, and the United States. Only in Japan are capital imports actually rising, but in the three other regions capital imports have stopped falling. This may be a sign of the recession ending. If historical patterns hold, this leveling would indicate a fourth-quarter recovery rather than a third.


This post continues in Part II