Wednesday, December 2, 2009
Low Housing Inventories? Foreclosures continue to drive the market.
Friday, November 20, 2009
America’s Lost Decade Already Happened
Monday, November 16, 2009
Resolving the Foreclosure Crisis: What Can We Do?
Because no price index is perfect, especially when volumes are low, the government should reevaluate home values periodically, say, quarterly. If homeowners cannot sell their houses at the declared value, the declared value is too high and must be lowered. If take-up rates are high and homes are selling easily, the value may be too low.
Who would own the mortgage? The mortgages would be held, initially, by the government. But these notes do not have to be held. Since the mortgages are now above water and since the government is committed to repeating the plan if values should fall, the mortgages could be sold to private investors. If the plan is followed in its entirety, the mortgages would not even need a government guarantee.
How much would the plan cost? The cost of the program depends on take-up rates, but it need not be expensive. Outstanding mortgage debt increased almost $4 trillion between 2005Q1 and 2008Q2. The average LTV for these mortgages was far less than 80 percent. Most of these households remain above water; indeed, the best guess is that 3 out of 4 of these households remain above water. With a 100 percent take-up rate amongst under-water households, the cost of the program would be roughly $100 billion. Most likely, the final bill would fall between $50 and $100 billion.
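As a rough illustration of how a number in that range arises, here is a back-of-envelope sketch; the underwater share comes from the figures above, but the average write-down needed to bring a mortgage above water is an assumption of mine:

    # Back-of-envelope cost of the plan; the 10 percent average write-down is an assumption, not from the post.
    new_mortgage_debt = 4.0e12   # mortgage debt added between 2005Q1 and 2008Q2, dollars
    underwater_share = 0.25      # best guess: 1 in 4 of these households is under water
    avg_writedown = 0.10         # assumed average write-down needed to bring an underwater mortgage above water
    cost = new_mortgage_debt * underwater_share * avg_writedown
    print(f"Cost with 100 percent take-up: ${cost / 1e9:.0f} billion")  # roughly $100 billion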
Wednesday, November 11, 2009
Separations and Hires: Has the Recovery Stalled?
The JOLTS data (find the data here) produced by the BLS gives the best insight into the current state of the job market. As Robert Shimer, a professor at the University of Chicago, showed some time ago, unemployment can go up either because workers become more likely to lose their jobs (the separation rate) or because unemployed workers have a more difficult time finding new jobs (the hires or matching rate). The BLS only began collecting this data in late 2000, much too late for us to compare the current downturn to previous episodes. Bob Shimer, however, has computed separation and matching rates going back to 1947 (his data is here). The data is not strictly comparable, but I think we can use the lessons from Shimer’s data and apply them to the current episode.
I have spent a lot of time working with his data lately. The cyclical behavior of matching and separation rates is remarkable and should provide the key to the next level of understanding in business cycle research. The more I work with this data the more I feel like I am beginning to understand consumer behavior during recessions.
Matching rates, the probability of finding a job conditional on unemployment, begin to fall well before recessions begin and continue to fall well after the recession ends. Separation rates tend to rise at the beginning of recessions and tend to fall well before the end of the recession. Not surprisingly, the worst recessions in the post-war era (1958, 1982) are characterized by large changes in both rates.
In every post-war recession, the separation rate returned to more-or-less its long-term average 4-to-6 months before the trough. The fall in separation rates also coincides with a rise in consumption. Apparently, consumption begins to rise once employed households no longer fear unemployment – a rational outcome. Consumption rises before unemployment falls. Unemployed workers continue to have trouble finding work long after the recession ends, but their consumption is small and stable. It is the consumption of employed workers that rises.
As a result of this research, I am beginning to have more faith in the signals emitted by the JOLTS data. First, take a look at the picture below. The picture shows the number of hires each month in the JOLTS data from late 2000 to January 2009. Amazingly, the number of hires began to fall as early as January 2006, the same month the housing market turned sour. This data is consistent with the duration of unemployment calculated from the household survey. The average duration of unemployment is now at a record high, implying a record low probability of finding a job conditional on unemployment.
Of course, I want to know if the recession is over or, if the recession has yet to end, when it is likely to end. Take a careful look at the very end of the hires graph. Hires spiked upward in July but have since fallen back. Granted, the fallback is only two months’ worth of data, but it is consistent with a labor market that tried to improve and then suffered a setback. This is consistent with employment data (discussed here) and it is consistent with the picture from the separation rate.
As I showed in March, the separation rate (the total number of separations) has been steadily falling since early 2007. This data alone would indicate that flows into unemployment should be falling, quite the opposite of our experience over this period. Again, note the July bobble in separations.
To understand the labor market, we must distinguish voluntary from involuntary separations. If I quit my job today, knowing I had a new job in the bag, I would show up first as a separation and then as a hire. We care only about involuntary separations. To get a better picture, subtract the number of monthly quits from total separations (a minimal version of this calculation is sketched below). The resulting picture, shown below, gives a completely different view of the state of the labor market.
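A minimal sketch of that adjustment, assuming the JOLTS levels have already been loaded into a pandas DataFrame; the column names here are placeholders, not the BLS series IDs:

    import pandas as pd

    def involuntary_separations(jolts: pd.DataFrame) -> pd.Series:
        # Total separations minus quits: a rough proxy for involuntary separations
        # (layoffs and discharges plus other separations).
        return jolts["total_separations"] - jolts["quits"]

    # Example comparison to the 2001-2007 average, as in the discussion that follows:
    # invol = involuntary_separations(jolts)
    # print(invol["2009-01"] / invol["2001":"2007"].mean() - 1)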
The level of separations in January 2009 was 35 percent higher than its 2001-07 average level. Keeping in mind that half of that time period was during bad labor markets, this statistic is quite stunning. The labor market has improved since January. However, the recovery seems to have stalled, and over the past 4 or 5 months the number of involuntary separations has reached a plateau 17 percent above its pre-recession average.
This plateau also indicates a stalled recovery. While we do not have a sufficiently long time series to know the behavior of this series in previous recessions, Shimer’s separation rates fall sharply before the end of recessions and remain low thereafter. The high level of involuntary separations is not consistent with recovery. This data is giving the same signal as the initial claims data. Initial claims are down sharply from their peak but remain extremely high compared to their historic average.
Casey Mulligan, a Chicago economist, notes in his blog (and more recently here) that consumer spending is rising, as is disposable income, even as the job market continues to deteriorate. In particular, he has been keen on noting the ongoing increases in personal income. He does realize that personal income includes transfers (at record highs) from the government. I don’t think Casey Mulligan would really believe that transfers accompanied by an increase in debt are an actual increase in income.
Nonetheless, even as current income continues to rise, the high separation and low matching rates have sharply reduced permanent income for households – they are faced with an ongoing high probability of job loss and amazingly low odds of getting a new job if they become unemployed. And, labor income is far and away the largest portion of permanent income for the vast majority of Americans.
Sunday, November 8, 2009
The Unemployment Rate: Moderation through Participation
In October, the unemployment rate breached double digits for the first time since 1983. This number, 10.2 percent, seems bad: one out of every ten workers in the labor force is without a job. But the number is deceptively benign. In this recession, more than at any other time since the early 1970s, declines in labor-market participation are moderating the unemployment rate.
The unemployment rate including all workers who have left the labor force in the last year is currently about two percentage points higher than the official rate. That is, the drop in participation is currently contributing about 2 percentage points to the unemployment rate. Under this measure, the unemployment rate is currently at a record high.
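As a sketch of how such an adjusted rate can be constructed (the exact adjustment used here is not spelled out in the post; below, workers who left the labor force in the past year are simply counted as unemployed and added back to the labor force, and the magnitudes are round illustrations rather than the actual CPS figures):

    def adjusted_unemployment_rate(unemployed, labor_force, recent_leavers):
        # Count everyone who left the labor force in the past year as if they were still unemployed.
        return (unemployed + recent_leavers) / (labor_force + recent_leavers)

    # Illustrative magnitudes: 15.7 million unemployed and a 154 million labor force give the
    # official 10.2 percent; adding roughly 3.5 million recent leavers pushes the rate near 12 percent.
    print(adjusted_unemployment_rate(15.7e6, 154e6, 3.5e6))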
The contribution from flows out of the labor force is about the same as in the early 1970s. But there is an important difference in the current situation, and the difference is critical for the long-term outlook for the U.S. economy. In the early 1970s, the baby boomers were just entering the workforce. The drop in participation came as boomers left the labor market to remain in school. They went to school both for economic reasons and to avoid the draft. But this educated pool of workers has been a boon for the U.S. economy and is likely, in part, responsible for the emergence of the U.S. as an economic superpower.
Now, though, the decline in participation is not, for the most part, being driven by the young. It is being driven by the old. The workers leaving the workforce are older. The largest contribution to the decline in participation is among workers between the ages of 45 and 55.
These workers are in their prime earning years. They do not leave the labor force lightly. I suspect that the majority of these workers had jobs that no longer exist. Many of these workers will have to find jobs in new industries. A lot of their industry-specific human capital has been destroyed. Almost certainly, at least for a time, the new job will pay less than the old job. Even when they eventually find work, they will be a drag on growth.
What do we do with the large mass of dislocated workers over the age of 45? They are too young, and too poor, to retire. I don’t know the answer but I have a feeling that without this answer the U.S. economy is not going to remain an economic superpower for long.
To formulate policy, we need data. We do not know why these workers are not working. We need to find out who these workers are. What jobs did they hold? Why are they no longer working? What skills do they have? What skills do they need? Are there similar workers in similar circumstances who have managed to keep working? Why did one group fare well and another poorly?
Once we have the answers to these questions, then, and only then, can we begin to formulate a policy response. With funding, the BLS and the Census Bureau could answer these questions in a few months. There is no point in throwing money desperately at job creation programs until we understand the source of the jobs problem.
Saturday, November 7, 2009
The Employment Situation: Bad News for a Recovery
Although the labor market has improved substantially since early this year, over the past three months job losses have stabilized at around 200,000 per month. We have never had an economic recovery with job losses at this level. I find it beyond belief that the economy is in the midst of a recovery with these losses.
Most forecasters believe a jobless recovery has already begun. But a jobless recovery is characterized by a weak but stable labor market, a market where losses have ended but gains have yet to occur. An economy can, apparently, muddle along without job growth; it cannot grow with large job losses.
So, even at this level of losses, an economic recovery is not in the cards. But, more worrisome, other indicators of the labor market are much weaker than the establishment survey and some of these indicators point to accelerating losses.
Initial Claims Remain Weak
I wrote almost a year ago on the strong long-term link between initial claims per month and job losses per month. At the time, claims were accelerating sharply and were pointing to unheard of job losses. Initial claims have, of course, improved. But they have not fallen quickly or robustly. Initial claims seem persistently stuck above 500,000 per week.
These initial claims are consistent with job losses between 300,000 and 450,000 per month.
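A sketch of how the claims-to-losses link can be checked, assuming the FRED series ICSA (weekly initial claims) and PAYEMS (nonfarm payrolls) are pulled with pandas-datareader; the 300,000-to-450,000 range quoted above is my mapping from the historical relationship, not the output of this snippet:

    import pandas as pd
    import pandas_datareader.data as web

    claims = web.DataReader("ICSA", "fred", "1970-01-01")
    payrolls = web.DataReader("PAYEMS", "fred", "1970-01-01")

    monthly_claims = claims["ICSA"].resample("M").mean()          # average weekly claims in each month
    job_change = payrolls["PAYEMS"].resample("M").last().diff()   # monthly payroll change, thousands

    df = pd.concat([monthly_claims, job_change], axis=1).dropna()
    print(df.corr())  # the two series track each other closely at business-cycle frequencies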
The Household Survey is a Disaster
Once again, job losses as measured by the household survey are outpacing job losses from the establishment survey by a substantial number. From February to June, the household survey and the establishment survey were, on a twelve-month change basis, showing essentially the same job losses. However, since mid-summer, the household survey has pulled ahead by 878,000, about 292,000 extra losses a month.
As I wrote (here), the month-to-month changes in household survey employment are not a reliable indicator of labor market conditions. The survey is subject to substantial sampling variation and month-to-month changes can be absurdly misleading. I have found, however, that whenever the household survey jumps ahead of the establishment survey, in either direction, it tends to predict both the direction of the labor market and the sign of future revisions to the establishment survey.
The household survey, consistent with the claims data, is pointing to a much weaker labor market than is the establishment survey.
A Recession Dummy in the BEDS Model?
I don’t know why the establishment survey is so much stronger than other labor-market indicators. But, I do have a suspicion. A long time ago, I wrote about the Birth and Deaths model used by the BLS to adjust the establishment data. Essentially, the BLS uses this model to control for the number of businesses being created and destroyed every month. I suspect, but do not know, that the BLS uses a recession dummy in the model. The recession dummy, if it exists, is important and likely substantially improves the performance of the model.
I suspect the BLS turned off the recession dummy in the third quarter. Without the recession dummy in place, the model will, for any given read of the source data, produce fewer job losses. Remember, I do not know that they use one. But, if I were using a model to estimate losses, I would include one. So, I suspect that they have one.
Takeaways
We cannot have a recovery, jobless or otherwise, if the economy continues to shed jobs at a rate of 200,000+ per month. The best indicators do not, at the moment, point to any further near-term improvement in the labor market. Until we see some substantial improvement in the labor market (at least zero net losses), the economy cannot recover.
Saturday, October 31, 2009
Doing the Math: The Fiscal Multiplier Effect of the 2009 American Recovery and Reinvestment Act
Any regular reader of this blog knows my opinion of fiscal stimulus and fiscal multipliers. I have shown evidence (here and here) that fiscal multipliers must be below 1 and are likely closer to zero or even negative. Pushing against this belief is the recent performance of the economy. GDP grew by a very healthy 3.5 percent in the third quarter, boosted by gains ranging from private consumption, to residential investment, to direct government expenditures. According to the Vice President, the increase in GDP is entirely attributable to the stimulus efforts by the administration.
I am inclined to agree.
I believe that in the absence of government stimulus the U.S. economy would have continued to contract in the third quarter. What’s more, I say this without changing my views on the multiplier. How is that possible? Let’s do some math.
The following table shows the GDP growth that would have occurred in the absence of fiscal stimulus under different assumptions for what counts as government stimulus using a multiplier of 1 (my maximum) and a multiplier of 3 (Romer’s base case). Because there is considerable uncertainty over the timing of the stimulus, I show the four-quarter change in GDP through the third quarter. Over this time period, GDP fell by 2.3 percent. The numbers in the table show the four-quarter change without stimulus and should all be viewed relative to the 2.3 percent fall.
The first row of the table assumes that the sum total of fiscal stimulus is the pay out from the ARRA. According to data from Recovery.gov, as of October 29, the government had actually spent $173 billion (this number includes tax relief and spending). This is the most conservative estimate of stimulus spent to date. (Romer would include both actual spending and money allocated ($310 billion). I agree with her but want to use the smallest number to start. My numbers get bigger fast anyway.) Assuming a multiplier of 1 ($173 billion spent adds $173 billion to GDP), counterfactual GDP growth is -3.6 percent. With a multiplier of 3, the counterfactual falls to -6.2 percent.
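The arithmetic behind those counterfactuals is straightforward; a sketch, with the level of real GDP an approximation of mine (roughly $13 trillion) rather than a figure from the table:

    gdp_level = 13.0e12    # approximate level of real GDP, an assumption of mine
    actual_growth = -2.3   # four-quarter change in GDP through 2009Q3, percent
    stimulus = 173e9       # ARRA funds actually spent, per Recovery.gov

    for multiplier in (1, 3):
        boost = 100 * multiplier * stimulus / gdp_level       # stimulus contribution, percentage points
        print(multiplier, round(actual_growth - boost, 1))    # roughly -3.6 and -6.3 percent without the stimulus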
I find both of these numbers credible.
I actually believe GDP would have fallen more than 6 percent in the absence of the programs. And, if I believed that total stimulus was $173 billion, I would also have to join Romer in the multiplier is greater than 3 world. Fortunately for me (I am not the introspective type), the total amount of stimulus is much greater than $173 billion.
Using this number and a multiplier of 1 yields counterfactual GDP growth of -15.4 percent. With a multiplier of 3, the number falls to -41.6 percent. These numbers are beyond the pale. GDP never fell by more than 40 percent in the Great Depression. Using this number and my belief of a counterfactual fall in output of 10 percent, the current multiplier is negative.
But, there is also the issue of the Fed’s balance sheet. The Fed has pumped almost $1,000 billion into the economy over the same period. This is measured by the expansion of the Fed’s balance sheet. There is no difference between receiving a $1 trillion tax break and receiving $1 trillion in cash from the Fed. We can argue about effectiveness, but that is the exercise here.
Adding the Fed’s balance sheet expansion to the calculation yields counterfactual GDP growth of -22.4 percent. With a multiplier of 3, we have the absurd number of -62.5 percent. I believe the latter number would be a modern-era record. I suspect we would have to go back to the plague years in Europe to find an equivalent fall. I would love to see Summers or Romer stand up and make the case for this counterfactual.
So, like many banana republics before us, we have managed to spend enough to turn GDP growth positive. But, the cost of achieving this number has been phenomenal. To achieve a paltry $212 billion increase in real GDP, we spent about $2 trillion. This gives us a net return of 20 cents on the dollar.
Yes, GDP would have fallen without the spending. The probability that this recession would have scored as a depression in the absence of stimulus is high.
Was it worth it?
Housing Tax Credit: Did it boost the housing market?
Including both new and existing homes and valuing the sales at their median price, total housing sales increased by about $16 billion through September. It is, of course, very difficult to measure how much of the increase is attributable to a natural increase in demand and how much is attributable to an increase in demand derived from the tax credit alone. I estimate two distinct effects: the direct effect of the extra money pouring into the housing market and the indirect effect from the induced change in prices.
In normal times, according to the National Association of Realtors, about forty percent of sales go to first-time home buyers. The tax credit likely increased this number. I don’t know by how much, but I assume 50 percent is a conservative number. Under this assumption, total outlays under the tax credit were $14.1 billion through the end of September, slightly more than the CBO’s scoring of the program ($11 billion) and slightly less than the NAR’s estimate ($15.2 billion).
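The outlay figure follows from the $8,000 first-time homebuyer credit; a sketch of the arithmetic, with the total sales count an approximation of mine rather than a figure from the post:

    credit_per_buyer = 8000    # first-time homebuyer tax credit, dollars
    total_sales = 3.5e6        # approximate new plus existing home sales, January-September 2009 (my approximation)
    first_time_share = 0.50    # assumed share of sales going to first-time buyers

    outlays = credit_per_buyer * total_sales * first_time_share
    print(f"${outlays / 1e9:.1f} billion")  # roughly $14 billion, in line with the $14.1 billion above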
I assume that the $14.1 billion has a direct one-for-one impact on the demand for housing. This impact is shown in the figure as the difference between the solid black and the dashed black lines. The estimated impact has grown through the year. The sharp rise between April and June reflects ordinary seasonal fluctuations in demand. These data are not seasonally adjusted.
In addition to the direct effect, the subsidy has an indirect effect through induced price changes. Given an estimated price elasticity of demand, the subsidy pushed average house prices up by about 4 percent (My estimate is just below that of Goldman Sachs who estimated a little more than 5 percent. They must have estimated a slightly higher demand elasticity.) This price effect induces sales of existing homes. Because of the higher price, existing home owners have an incentive to sell their house and either rent (less likely) or buy a new home (more likely). This impact is small relative to the direct effect and is shown as the difference between the dashed red and dashed black lines.
In total, I estimate that the existence of the subsidy has boosted the value of housing sales between January and September by about $17.3 billion.
There is no question that this tax break helped the housing market. An extra $17 billion likely kept some home builders in business and it must have paid the bills of quite a few real estate agents. Whether or not the program was worthwhile depends on the public policy decision of whether or not we want to subsidize the housing market. There are no macro side effects of the program as designed.
Whether or not the tax credit is extended (and it looks like this is a done deal), the housing market is likely to decline in the coming months. Because the program was expected to expire this month, most households eligible for the program and able to buy a house have already taken advantage of the tax credit—see the 9 percent surge in existing home sales in September. The majority of these households would have bought a house sometime in the next year without the subsidy and the tax credit simply changed the timing of their decision. With the extension, I expect the value of sales to fall by about $9 billion over the next several months and to fall by the remainder as the credit is phased out.
Of course, Congress, looking ahead to the midterm elections, hopes that underlying demand for housing will surge by the time the credit expires, disguising the tax-credit-induced slump. It is possible, but we are going to have to see a more robust labor-market recovery before this can happen.
Thursday, October 15, 2009
Estimated Taylor Rules: A Good Fit for Monetary Policy?
In fact, the fit of these lines is so good that I have become a bit suspicious over the exercise. I decided to do two things: 1) I would extend the forecast back to the early 1950s and 2) I would drop any measure of economic slack.
The reason for the first is obvious: The Fed seems to have made systematic policy mistakes during the 1970s after performing admirably in the 1960s. Then, if the Taylor rule is to be a decent guide as to the appropriateness of policy, it had better deviate systematically in the 70s.
The reason for the second is both econometric and philosophical. If the lagged policy rate responds to economic slack and slack evolves only slowly, the system is over-identified: we are more or less estimating an identity. Philosophically, economic slack is impossible to measure. We have no idea how to measure it or how this measure would relate to inflation. (I know: there are a thousand ways to measure slack. I am just saying none of them are meaningful.)
The result of the exercise is shown below. I too was stunned.
The estimated Fed funds rate lies almost exactly on top of the effective funds rate, shown as a dashed line. Either this is a meaningless measure of policy or monetary policy has been equally appropriate from 1956 through 2009. I choose the former as the more reasonable explanation.
I am not saying monetary policy has been bad of late. I am saying that using a Taylor rule to evaluate policy is meaningless. (Not the theory behind the Taylor rule; just the practice of estimating such a rule.)
What gives?: The Taylor Rule is an Identity
The Taylor rule is really something of an economic identity rather than a model of economic decision making. In effect, estimating a Taylor rule is no different than estimating the national income identity and finding out that Y really is equal to the sum of its parts.
To see this, use the old quantity theory equation:
P = v(φ)*M/Y
Prices are a function of Money (M), the quantity of goods in the economy (Y), and perhaps some measure of the current state of the economy (v(φ)), where the variable φ simply stands in for the current state of everything not written explicitly.
This equation is old fashioned but is fundamentally an identity: more money chasing fewer goods leads to higher prices.
We can transform the identity by taking logs and then first difference the equation.
dln(P) = dln(v(φ)) + dln(M) – dln(Y)
Then rearrange the equation to put money on the left hand side:
dln(M) = dln(P) + dln(Y) – dln(v(φ))
This is the Taylor rule.
Wait! We started with an identity (of sorts) and we ended with a Taylor rule. Logic then insists that the Taylor rule is an identity.
Okay, I hear you. The Fed does not target the money supply; the Fed targets a nominal short-term interest rate and the Taylor rule uses a short-term interest rate not the supply of money.
To see the link, ask how the Fed enforces its nominal target. The Fed buys or sells bonds until short bonds are trading at the desired price. What does it use to buy and sell bonds? Money, of course.
So, the money supply is determined by the desired interest rate. The Fed, in effect, adjusts the money supply such that the short-term bond market is in equilibrium at the desired policy rate. M = f(r) and r = G(M), where G is the inverse function of f.
Then, we have finally derived the Taylor rule.
dln(f(r)) = dln(P) + dln(Y) – dln(v(φ))
Finally, it is a small step to transform the above equation to the more familiar:
r_t = α·r_(t-1) + β1·Inf_t + β2·(Growth Rate of Output)_t – β3·(Potential Something)_t
The coefficients in the equation can be estimated to minimize the information loss in the last transformation. The final term simply reinterprets our state variable. Recall, v(φ) was just some term reflecting the economic environment. In my specification, I set this coefficient to zero. In a standard specification, the restriction sets this to some measure of potential output. We don’t have to argue over signs or magnitudes because we may freely estimate the coefficients.
The Taylor rule fits because it is an identity.
The Quantity Theory
For those of you who don’t believe in money, or at least who don’t believe in the link between inflation and money, take a look at the following picture. The graph shows the five-year change in M2 divided by output against the five-year cumulative change in the CPI. If M/Y rises rapidly, so do prices.
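A sketch of how that picture can be reconstructed from FRED data (series mnemonics M2SL, GDPC1, and CPIAUCSL; the post does not name its output measure, so real GDP stands in here):

    import pandas_datareader.data as web

    m2 = web.DataReader("M2SL", "fred", "1960-01-01")["M2SL"].resample("Q").mean()
    gdp = web.DataReader("GDPC1", "fred", "1960-01-01")["GDPC1"].resample("Q").mean()
    cpi = web.DataReader("CPIAUCSL", "fred", "1960-01-01")["CPIAUCSL"].resample("Q").mean()

    money_per_output = (m2 / gdp).pct_change(20) * 100  # five-year (20-quarter) percent change in M2/Y
    inflation = cpi.pct_change(20) * 100                # five-year cumulative percent change in the CPI
    # Scatter-plotting inflation against money_per_output shows the positive long-run relationship described above.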
NorthGG asked if the relationship is between core or headline. I would say: yes. The five-year changes should completely eliminate the influence of high-frequency changes in food or energy. Only prolonged increases in either food or energy prices will show through. And, prolonged increases are also known as inflation.
Monday, October 12, 2009
Is High Inflation Likely?
In his blog, Krugman dismisses the possibility of inflation. He goes farther and calls the Fed irresponsible for even considering the possibility of inflation. Krugman’s analysis is completely misleading about the prospects for inflation. He is using a backward-looking indicator that ignores changes in current policy. I too believe that high inflation outcomes are likely avoidable. Krugman seems to believe high inflation will be avoided even if the Fed leaves rates at zero forever. I believe that positive action (tighter policy) on the part of the Fed will be necessary to avoid inflation.
The Method
To see the difference in our beliefs, let’s work within Krugman’s framework. Krugman proposes the Taylor rule as his model of monetary policy and uses the parameters estimated by Glenn Rudebusch at the SF Fed to judge the current, appropriate policy stance. Krugman uses the following Taylor rule:
Target fed funds rate = 2.07 + 1.28 x inflation - 1.95 x excess unemployment
Rudebusch’s weight on the unemployment gap is much higher than most other estimates. Taylor called for a coefficient of 0.5 on excess unemployment relative to a coefficient of 1.0 on inflation, implying the Fed should care twice as much about inflation as unemployment. Krugman believes the Fed should care only about 2/3 as much about inflation as about the unemployment gap. The change in weights is quite significant.
In Krugman’s specification, with current inflation at -.02 percent (core PCE, 4-quarter change), the first two terms imply a Fed funds rate of positive 2.1 percent. The unemployment gap is then driving the current negative policy rate. With current unemployment around 9.8 percent (it was only 9.3 percent in Q2) and using the CBO’s pre-recession estimates of the NAIRU, the implied policy rate is very, very negative; indeed, much more negative than Krugman reports, negative 7.7 percent. If we had used more standard weights and a constant of 2, the implied policy rate is just barely negative, -0.5 percent.
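The arithmetic, as a sketch; the NAIRU value below is my stand-in for the CBO’s pre-recession estimate (roughly 4.8 percent), and the "standard" rule uses the weights described above (1.0 on inflation, 0.5 on the gap) with a constant of 2:

    inflation = -0.02               # core PCE, four-quarter change, percent
    unemployment, nairu = 9.8, 4.8  # current unemployment rate and assumed pre-recession NAIRU
    gap = unemployment - nairu

    rudebusch = 2.07 + 1.28 * inflation - 1.95 * gap  # Krugman's specification
    standard = 2.0 + 1.0 * inflation - 0.5 * gap      # more standard weights with a constant of 2
    print(round(rudebusch, 1), round(standard, 1))    # roughly -7.7 and -0.5 percent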
But, to be honest, I am with Krugman and don’t see any justification for Taylor’s weights. The Taylor rule is not a model of the policy rate but was rather designed as a descriptive rule for understanding policy setting. Given this view, we are free to estimate the coefficients, and if I estimate the Taylor rule I arrive at coefficients closer to Glenn’s than to Taylor’s.
The Mistake
"If you think the Taylor rule was a good guide to policy in the past, the Fed shouldn’t start to raise rates until the rule starts, you know, yielding a positive number." The first part of the quote is wrong and the second part puzzling.
The Taylor rule was not a good guide to policy in the past: the Taylor rule has been a good description of past policy. The two statements are not equivalent.
The Taylor rule is essentially a linearized equation from a specific model of Fed policy and inflation and resource slack. The Taylor rule does not describe a fundamental relationship between policy, output and inflation. There are many models of output and inflation.
In particular, we can modify Krugman’s Taylor rule to make it forward looking. An optimally behaving Fed will set policy not based on the past behavior of variables but rather on their forward-looking expectations.
Money Matters
"For some reason many Fed officials seem to view it as inherently unsound to stay at a zero rate for several years running — but I’m at a loss to understand what model, or even conceptual framework, leads them to that conclusion." Maybe some of the Fed officials remember the adage ardently espoused by Friedman: inflation is always and everywhere a monetary phenomenon.
Krugman, along with the rest of the Salt Water economists, seems to have completely forgotten about the link between money and inflation. Lucas, Friedman, Smith, Hume, and yes even Keynes believed in a link between the quantity of money and inflation. Lots of money: lots of inflation.
Every scholar who has ever seriously examined the relationship between inflation and money has found a positive relationship. Lucas found a positive relationship using both U.S. and international data. Friedman found the same in the 1960s for data sets running from the mid-1880s to the early 1960s. Both Hume and Smith believed in a positive relationship between money and inflation, although their use of data was more anecdotal and conjectural than rigorous.
The most recent study of inflation and money of which I am aware was presented last Thursday at a Federal Reserve conference. “Money and Inflation,” by Bennett McCallum and Edward Nelson, finds a consistent positive relationship between money supply and inflation: money raises inflation about 1-for-1 with a two-year lag.
The Fed has increased the money supply substantially over the last two years. According to the Federal Reserve’s H.6 release, M1 has increased 18.6 percent over the past twelve months, while the broader M2 has risen 7.8 percent. These are extremely high growth rates. According to the work of McCallum and Nelson, this growth rate will lead to inflation one to two years from now.
If we replace Krugman’s backward-looking PCE with a reasonable forward looking expectation of inflation driven by the increase in the money supply, we find that between one and two years from now the policy rate had better be above 2 percent. Further, if we believe the results of McCallum and Nelson, the policy rate probably needs to start increasing now. That is, the money supply has to be reduced now to avoid the high inflation in the future.
This is the model policy makers likely have in mind when they call for tighter policy.
I don’t think there is any particular rush to raise rates. I think the Fed can afford to be patient and watch the data.
Conclusion
Krugman and I agree: There is very little likelihood of high inflation. Krugman believes in the Taylor rule and so a passive Fed can achieve this outcome. I believe in money and so an active Fed will achieve this outcome.
Krugman’s own Taylor rule framework implies high inflation within one to two years if the Fed is passive. Fortunately, the Fed is not passive. The Fed has the power to control inflation. It simply has to withdraw the liquidity in a timely manner.
Saturday, October 10, 2009
Gold Bugs, Exchange Rates, and Monetary Policy
Using monetary policy to control the value of the dollar would be a policy mistake.
The Fed is already trying to do too much with a single policy tool. Adding yet another criterion to their already long list is too likely to lead to policy mistakes. The Fed needs to keep its eyes on the balls already in the air and not add balls to impress like a foolish street juggler.
But, at the same time, the Fed should not disregard the exchange rate as a signal of overly loose policy. The exchange rate is perhaps the best, broadest, and most flexible dollar denominated price. A depreciation of the dollar is inflation—the dollar price of foreign goods goes up. It reflects the average relative price of dollar goods. Amongst their other price signals, the Fed should monitor the exchange rate. The value of this mechanism as a price signal ultimately lies behind the current dispute amongst various members of the FOMC, no different than the debate 4 years ago on the value of the Cleveland Fed’s median inflation rate.
The danger of the Fed ignoring these signals is exceptionally high at the moment. There is a group of economists (Roubini talks about this all the time) that has been looking for a decline in the real value of the dollar for a long time. They view the real value of the dollar as an equilibrating mechanism to adjust the pattern of global demand. If the Fed shares these beliefs, it may confuse a nominal movement in the dollar with the long-looked-for real depreciation.
Remember, any price has two components, real and nominal. The real component reflects an equilibrium between supply and demand (of and for the good). The nominal value reflects an equilibrium between the good and money. The Fed controls the latter (to an extent) and never the former.
To tie it all together, the Fed should be careful not to confuse the real and nominal value of the dollar. From their perspective, unexplained movements in the dollar are probably nominal.
Watch the value of the dollar but don’t target it.
Wednesday, September 30, 2009
The True Impact of Fiscal Stimulus
“Masked the underlying weakness – Isn’t that exactly the definition of fiscal stimulus?” you ask.
The answer is not necessarily. To explain my views here, let’s turn to the case of China, the current poster child for successful fiscal and monetary stimulus in the midst of a crisis already occurring. In the second quarter, GDP increased at a rate someplace in the high teens. (China only issues real GDP on a four-quarter change basis; so, quarterly changes must be inferred.) The growth rate was phenomenal and is directly attributable to government intervention.
How did China achieve this remarkable growth? Easy. I believe that the government simply insisted that factories continue producing output—I am sure the insistence was accompanied by a promise of a fiscal transfer. Factories keep producing output, calamity averted, the stimulus is effective.
In fact, GDP gets an additional boost. The factories are producing output that nobody is buying – exports remained depressed and consumption is not picking up all of the slack. The output from the factories is accumulated as inventories, a positive contribution to GDP. But, since nobody is buying, the price of the inventories also falls. Real GDP gets an arbitrarily large boost as the price deflator declines. [I am exaggerating for effect. I know some of the goods were purchased, but the increase in output likely owes to exactly the channels I am suggesting. If you want to see this at work in the United States, go back to the fourth quarter of 2006. Autos made a large positive contribution to GDP as the prices of new cars fell sharply and the real value of car inventories was pushed up.]
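A stylized numerical illustration of the channel, with purely made-up numbers rather than Chinese data:

    # Unsold output piles up as inventories while its price falls; deflating by a falling price
    # index makes the real contribution to GDP larger than the nominal value of the unsold goods.
    nominal_inventory_build = 100.0  # billions of unsold output added to inventories
    price_deflator = 0.90            # the price of those goods falls 10 percent

    real_inventory_contribution = nominal_inventory_build / price_deflator
    print(real_inventory_contribution)  # about 111 billion in real terms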
This is not real growth. It is not real growth in the present and it is a direct drag on growth in the future. Barring a sudden increase in OECD demand, China is in for some hard times.
This is the nature of fiscal stimulus.
The Fiscal Cost of Stimulus
I don’t believe it for a minute. But, Krugman’s views are amazingly internally consistent and there is no questioning his mental acuity.
He believes in large multipliers. That is, every dollar increase in government spending increases output by something much larger than a dollar. (He has publicly averred to multipliers of around 1.3, but this would not be even in the ballpark of self-financing. Spend $1 billion, get $1.3 billion. Spend $1 billion, raise $100,000 in extra tax revenue. With our tax system, self-financing begins with multipliers greater than 3.)
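The self-financing condition is easy to check; a sketch, with the average tax take on an extra dollar of GDP an assumption of mine (roughly one-third):

    tax_rate = 1 / 3  # assumed average tax take on an extra dollar of GDP
    for multiplier in (1.3, 3.0):
        extra_revenue_per_dollar_spent = multiplier * tax_rate
        # Self-financing requires extra_revenue_per_dollar_spent >= 1, which needs a multiplier of about 3.
        print(multiplier, round(extra_revenue_per_dollar_spent, 2))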
If Krugman is correct on the multipliers then he is correct on the cost of the fiscal stimulus. If not, …
Tuesday, September 29, 2009
The Federal Deficit and the Tax Burden: We can afford the debt not the spending.
I have always taken it for granted that the debt was sustainable. After all, why would the markets lend to the government if the debt were not sustainable? Recent events (and Krugman’s excellent article in the NY Times on the state of macroeconomics—Summers’ ketchup economics is brilliant) have shaken my outright confidence in markets. So, I thought I should do my own sustainability calculations. I came to the surprising conclusion that the stock of government debt is not only sustainable but downright affordable.
To be conservative in my estimates, I use the entire amount of Treasuries outstanding. Publicly held debt is a better measure of government debt, but either will do for my purposes—funds held by the Fed and other public agencies don’t really count. In my mind, counting them is the same as counting the total sum of Treasuries that could be issued.
To check for sustainability, I take the Federal Government in hand and put it on a debt payment plan. If the Feds were a household and we were financial managers we might choose a 10 to 30 year plan depending on their age and circumstance. But, governments are special. I chose to put the gov’t on a 100-year payment plan at the end of which time debt must be zero. Of course with the payment plan the government is forbidden to acquire new debt—this feature will be the lemon in the pudding.
If real interest rates stay at their current low levels of around 1 percent (unfathomable), the payment plan costs each of the 137 million workers in the United States $1,361 per year, a paltry 1.7 percent of personal income. Even if real interest rates rise as high as 10 percent (equally unfathomable (or maybe I can picture this one)), the debt burden per year remains an affordable $8,248 per year per worker or about 10 percent of personal income. The latter number is large but feasible.
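The per-worker figures are just 100-year level-payment annuities on the debt stock; a sketch, with the stock of Treasuries outstanding an approximation of mine (roughly $11.5 trillion in late 2009):

    def annual_payment(debt, real_rate, years=100):
        # Standard level-payment amortization formula.
        return debt * real_rate / (1 - (1 + real_rate) ** -years)

    debt = 11.5e12   # approximate total Treasuries outstanding, an assumption of mine
    workers = 137e6  # number of workers

    for rate in (0.01, 0.10):
        print(rate, round(annual_payment(debt, rate) / workers))
    # Roughly $1,300 and $8,400 per worker per year, in the same ballpark as the figures above.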
The real problem is that the government must also raise sufficient funds to keep from borrowing more. The Bush administration ran the largest deficits of any government to that time. If we add the Bush deficits to the total, yearly payments need to rise. Under the low interest rate case, this raises the payments to a little more than $3,000 per year per worker. At 10 percent, we are quickly approaching $10,000 per worker per year.
The Obama deficits are projected to be much, much larger. If we have to raise revenue to offset the average Obama deficit over his first term as calculated by the CBO, even at 1 percent interest, the debt payment is $6,326 per worker. At 10 percent interest, payments per worker increases to almost 17 percent of personal income.
This exercise reveals the national debt to be affordable in the ordinary sense of the word. The United States could pay off its debt in a mere 100 years with only a modest strain on workers. However, if government spending continues to rise (or stays at current levels), the strain on workers is likely to be extreme. Additional taxes in excess of 10 percent of total personal income are likely to have a large negative impact on the workforce.
I might be in favor of higher government spending or I might be against it. As always, the decision is one of public policy and not of economics. But, good economics should inform the decisions and we should make an informed choice on how much to spend based on an honest national dialogue.
The Strength of the Recovery: Demographics could kill it all.
In my last two posts, I gave a cycle-by-cycle breakdown of the key parts of GDP. The rate of recovery in individual components was not particularly strong. Adding up the various measures, the strength of previous recoveries came about because most of the rising demand coming out of recessions was satisfied from domestic sources. In this recession, any rising demand is likely to be satisfied to a large extent by production outside of the United States. Just as the downturn in the United States was cushioned by a fall in imports, the rebound will be held back by imports.
But, in my mind, this is all accounting. The potential U.S. recovery is exposed to a much more serious threat: the U.S. consumer. In percentage terms, the rise in the savings rate in this recession far exceeds the rise at any other time in post-war history. Indeed, we have to go back to the 1930s to see a similar spike. Prior to this recession but still post-war, the largest increase in the savings rate was a little more than 30 percent. In this episode, the increase has been a little more than 80 percent. Granted, the calculation is from a very low base near 2 percent of income.
However, never in post-war history has consumption accounted for such a large portion of aggregate demand. I believe the large change in savings reflects a combination of cyclical forces and the beginning of a long-term consumer retrenchment. Americans must consume a smaller portion of their income.
With 1 in 6 Americans underemployed and with wage growth falling, consumption is likely to be a substantial drag on any recovery.
Yet, I believe Mussa would retort (he makes the argument in his paper) that consumption has fallen more than in any other recession and we are due a bounce-back. Indeed, so Mussa would say, following the recession of 1980 consumer spending bounced back with a vengeance.
Take a look at the following picture. Despite the financial crisis aspect of this recession, consumer credit was unimpaired relative to previous recessions. Real consumer credit has only nudged down. Compare this to the almost 15 percent fall in 1982. In 1982, real interest rates rose to record levels. Consumers stopped spending. Following the recession, the expansion of credit played a huge role in the rapid expansion of spending. Consumer credit grew at almost three times the pace of the average recovery. With almost no impairment in this recession, the room for a bounce back is smaller.
Most model-based forecasts assume that the fall in population itself will subtract only a few tenths from growth. With labor shares and capital constant, this is true. However, the fall in prime-age workers is also likely to be associated with a long-term downward trend in investment. We already have almost a decade of subpar investment growth behind us. More importantly, the loss of human capital will be severe. It takes years to produce prime-age workers. These workers embody skills that have been acquired over a lifetime of work. Their loss is likely to drag on labor productivity for the foreseeable future.
Wednesday, September 9, 2009
Part I: The Great Recession of 2009: Where are we going?
We examine, in turn, key elements of U.S. demand, running from investment to imports to exports to consumption. For each series, we plot data 12 quarters on either side of the end point of post-war U.S. recessions. The zero date is the last quarter of the NBER recession. The solid line is the average behavior of all post-war recessions excluding 2009. The dotted line shows the behavior of the series in the worst post-war recession, defined as the recession with the largest peak-to-trough decline in the particular series being shown. All points on the dotted line belong to the same recession. The dashed line shows data for the current recession. We plot the data as if 2009Q2 is the end of the recession. All lines are indexed to 100 at the last date of the recession. The indexing scales the graphs to show cumulative percent changes from the zero date. For example, a value of 120 in period -4 indicates a 16 percent fall in the series over the last year of the recession, while a value of 120 in period 4 indicates a 20 percent rise from the end date of the recession. All series are real.
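A sketch of how such an indexed comparison can be built, assuming a quarterly pandas Series for each component and a list of NBER recession-end dates; the names investment and troughs below are placeholders:

    import pandas as pd

    def recession_window(series, trough, quarters=12):
        # Slice +/- `quarters` around the recession-end date, index to 100 at that date,
        # and label observations by quarters from the end of the recession.
        # Assumes the window fits entirely within the sample.
        i = series.index.get_loc(trough)
        window = series.iloc[i - quarters: i + quarters + 1]
        indexed = 100 * window / series.iloc[i]
        indexed.index = range(-quarters, quarters + 1)
        return indexed

    # The solid "average" line is then the mean across post-war troughs (excluding 2009), e.g.:
    # pd.concat([recession_window(investment, t) for t in troughs], axis=1).mean(axis=1)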
We begin with machinery and equipment investment. I have long argued that this recession is a manufacturing recession (see this post); therefore, in my view, the recession is unlikely to end without a recovery in this sector. Investment is one of the most volatile components of GDP, typically falling almost 10 percent over the last year of a recession. Post-recession, investment grows robustly, recovering in level terms one year after the end of the recession.
The worst investment recession was 1958. Investment fell a little more than 18 percent with the decline occurring in a single year. Investment recovered in level terms just in time for the 1961 recession. Despite the faster-than-average growth upon exit, investment did not recover in level terms to its pre-recession peak until almost 2 years after the end of the recession.
The current recession has solidly replaced 1958 as the worst investment recession. Investment has, to date, fallen more than 20 percent from its peak 6 quarters ago. Investment does not currently show any signs of life; however, investment turns on a dime and typically turns up only once the recession is over. If history is a reliable guide, investment is likely to surge in coming quarters. I believe a typical recovery is more likely than the sluggish recovery experienced after the 2001 recession. It seems a global manufacturing reshuffling is once again afoot. Even though this reshuffling is likely to destroy capital (the shutting down of American car plants, for example), it may also breed investment.
Like investment, trade has plummeted more in this recession than at any time in U.S. post-war history. Unlike investment, however, trade shows a clear, leading, end-of-recession pattern. The decline in imports tends to slow in the quarter or two before the actual end of the recession. Following the recession, imports tend to grow very rapidly, in part reflecting the strong trend in trade over the last 40 years. The worst import recession was in 1975.
In the current recession, imports have also surpassed all previous records, falling over 20 percent. As of the second quarter, imports were still declining, but the pace of decline was somewhat smaller than in previous quarters. This may signal the end of the recession or it may just be one of the inevitable bobbles in the data. In the former case, the recession would end following the third quarter.