The graph of Year-on-Year growth of Nonfarm Business Sector: Real Output Per Hour of All Persons illustrates what he says.

Roach is not the only person to comment on this. Others include Gavyn Davies and Jon Hilsenrath.

But if you look at the right parameters there is no puzzle or paradox. The numbers are perfectly logical as we shall now see.

As economics textbooks never tire of telling you, a worker with a backhoe can dig more mud than a worker with a shovel. The backhoe of course costs much more than a shovel, which is to say that if you want productivity you need to make capital investments. And capital investment is exactly what has been lacking in the US economy since 2007, as the graph of real net private investment to real GDP shows.

If you consider net investment of 5% of GDP the minimum required to keep the economy chugging along properly, then the cumulative shortfall in investment since 2007 amounts to 17.5% of a year's GDP.
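The arithmetic behind that figure can be sketched as follows; the yearly rates here are illustrative assumptions, not the actual investment series:

```python
# Hypothetical illustration: suppose net private investment should run at 5%
# of GDP but averaged only 2.5% over the seven years 2008-2014. The yearly
# figures are placeholders chosen to reproduce the 17.5% total.
required = 5.0                        # assumed minimum, % of GDP
actual = [2.5] * 7                    # illustrative yearly rates
shortfall = sum(required - a for a in actual)
print(shortfall)                      # 17.5 (% of one year's GDP)
```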

When unemployment is high and the labour participation rate low, low investment does not matter so much, because there is enough capital per worker to go around. Consider, for example, a factory with 10 employees and 10 CNC lathes. When the recession began, the factory let one worker go. It could then cannibalise one lathe to provide spare parts for the other nine. Or it could cannibalise two lathes for spare parts and run the remaining eight for nine shifts a week instead of eight.

But as workers are rehired this leeway vanishes. Meanwhile, the machines that have been flogged cry for maintenance or replacement. Naturally productivity per worker falls. The fall in productivity is actually a sign that the economy is returning to normal.

Roach quotes Robert Solow's comment in 1987 that "you can see the computer age everywhere except in the productivity statistics." He then goes on to add: "The productivity paradox seemed to be resolved in the 1990s, when America experienced a spectacular productivity renaissance. Average annual productivity growth in the country's nonfarm business sector accelerated to 2.5% from 1991 to 2007, from the 1.5% trend in the preceding 15 years. The benefits of the Internet Age had finally materialized. Concern about the paradox all but vanished."

But when you look at the graph of real net private investment to real GDP you realize that the high productivity simply reflected investment that ran above the normal rate for a long time.


Atif Mian and Amir Sufi in their book, *House of Debt*, ascribed the gap to households paying down debt contracted during the boom years. But as the graph below shows, the ratio of household debt service payments to disposable personal income is at its lowest level in 35 years. And still personal consumption expenditures have not recovered. So debt is clearly only part of the story.

My new ebook *Macroeconomics Redefined* has the full story.

P.S. The difference between the saving rates now and before the recession is about 3%. Household consumption accounts for about 70% of US GDP. Multiplying the two, we get 2.1%, which is roughly the size of the gap between actual and potential GDP as per Congressional Budget Office estimates.
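The P.S. arithmetic in code form, using the rounded figures from the text:

```python
saving_rate_gap = 0.03       # rise in the saving rate vs before the recession
consumption_share = 0.70     # household consumption as a share of US GDP

# The extra saving translates into a demand shortfall of this share of GDP:
gdp_gap = saving_rate_gap * consumption_share
print(round(gdp_gap * 100, 1))   # 2.1
```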

Now one of the theoretical arguments in favour of government spending is what is called the "Keynesian multiplier", which says that a dollar spent by the government results in total spending several times higher.

Assume that the government hires unemployed resources to build a $1,000 woodshed. The carpenters and lumber producers get an extra $1,000 in income. If they all have a marginal propensity to consume (MPC) of 2/3, they will spend $666.67 on new consumption goods. The producers of these goods will now have extra incomes of $666.67. If their MPC is also 2/3 they in turn will spend $444.44. The process will go on with each round of spending being 2/3 of the previous round. Thus a chain of secondary consumption spending is set in motion. But although it is an endless chain, the spending adds up to a finite sum. Mathematically, it is equal to 1/(1-MPC) or 3. Thus the $1,000 results in spending of $3,000.
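The chain of spending rounds described above is a geometric series, and summing it reproduces the closed-form 1/(1-MPC) multiplier:

```python
mpc = 2 / 3
initial = 1000.0

# Chain of spending rounds: 1000 + 1000*mpc + 1000*mpc**2 + ...
# 200 terms is far more than enough for the series to converge.
total = sum(initial * mpc ** n for n in range(200))
closed_form = initial / (1 - mpc)     # the 1/(1 - MPC) multiplier formula

print(round(total), round(closed_form))   # 3000 3000
```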

In the US the saving rate is about 5%, so the MPC ought to be about 0.95 and the multiplier by this argument ought to be 20. In reality some of the money spent by the government will later be spent on imported goods and some will be taken back by the government in taxes. Even then the multiplier should be about 15. However, even the most optimistic calculated values of the multiplier are rarely more than 1.5. Why should this be so? Why does government spending have so little effect on GDP?

The reason is that there is a fundamental flaw in the Keynesian multiplier argument. It confuses the average propensity to consume with the marginal propensity to consume.

In the figure below, which shows the saving rate from 1990 to 2015, it can be seen that during each of the recessions the saving rate goes up and stays elevated for several years thereafter. During a recession, by definition, income is falling. A higher saving rate during a recession means that consumption is falling even faster than income. Since both changes are negative and the fall in consumption is the larger of the two, the marginal propensity to consume is greater than 1. Therefore the Keynesian multiplier, which is 1/(1-MPC), is negative.
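A numerical illustration of that claim, with hypothetical recession-year figures:

```python
# Hypothetical recession year: income falls 2%, consumption falls 3%,
# so the saving rate rises even as both quantities shrink.
delta_income = -2.0
delta_consumption = -3.0

mpc = delta_consumption / delta_income   # 1.5: greater than 1
multiplier = 1 / (1 - mpc)               # -2.0: the multiplier turns negative
print(mpc, multiplier)
```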

In physical terms this means that when the government spends an additional $1,000 consumers save more than $1,000. After the recession the multiplier moves closer to zero and then into positive territory. But this probably takes several years by which time the justification for government spending is over.

The above is adapted from my new ebook *Macroeconomics Redefined*.

Since the last time we put up this graph, the US economy seems to have deteriorated somewhat. Could this be related to the monetary contraction? Probably, but we do not anticipate anything catastrophic for about a year. A Fed rate hike could of course change matters.


The YoY growth rate of Corrected Money Supply (CMS) has fallen from 20.2% in February 2014 to 8.4% in December 2014. By the end of 2015 the growth rate will be firmly in negative territory if it continues to fall at the current rate. And that is when nasty things can be expected to happen. The Fed could of course accelerate matters if it decides to raise interest rates.
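The projection behind "firmly in negative territory" can be sketched as a straight-line extrapolation of the observed fall:

```python
# YoY CMS growth fell from 20.2% (Feb 2014) to 8.4% (Dec 2014): 10 months.
start, end = 20.2, 8.4
fall_per_month = (start - end) / 10          # about 1.18 points per month

# Projecting the same monthly fall for 12 more months, to December 2015:
dec_2015 = end - fall_per_month * 12
print(round(dec_2015, 1))                    # -5.8
```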

Overall the economy is following roughly the same trajectory as before the 2008 crash. This time I suspect the end will be much nastier.


This graph uses the savings figures revised by the BEA in July 2013. It also assumes that sweeps have remained constant since May 2012 when the Fed discontinued publication of sweeps data.

On October 29 the Fed is expected to announce that it will end its bond-buying programme or QE.

The last monetary contraction which started around January 2006 first had its effect on housing starts, then on housing prices, then on bank fortunes, eventually leading to a crisis in the payments system, and finally decimating all asset markets.

In primary school students are posed problems of the following kind: if two apples cost Rs 20, how much would four apples cost? And they are taught that it can be solved in this fashion.

First you arrange the data as below:

    2    20
    4     ?

Then, if you cross-multiply you find that ? (or what in later years is called the unknown variable) is equal to 40.

In primary school this seems to be a universal truth. But in secondary school the student comes in for a rude shock. He finds that there are problems that cannot be solved in this manner and in fact yield the wrong answer if he attempts to solve them this way.

Thus, if 2 workers take 20 hours to build a wall, then it is incorrect to conclude that 4 workers will take 40 hours to build the same wall. Before one can solve such a problem, one needs to decide in advance whether it is one of direct or inverse proportion.
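The two cases can be put side by side in code:

```python
def direct(x1, y1, x2):
    """Direct proportion: y/x is constant (apples and their cost)."""
    return y1 * x2 / x1

def inverse(x1, y1, x2):
    """Inverse proportion: x*y is constant (workers and hours)."""
    return x1 * y1 / x2

print(direct(2, 20, 4))    # 40.0 rupees for 4 apples
print(inverse(2, 20, 4))   # 10.0 hours for 4 workers, not 40
```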

A similar argument applies to the equation 1 + 1 = 2. Imagine that I eat one apple between 8 am and 9 am and one apple between 9 am and 10 am. How many apples do I eat between 8 am and 10 am? An operational way of arriving at an answer would be to place 1 on the number line for the first apple and then move 1 unit to the right for the second apple to find that between 8 am and 10 am I have eaten 2 apples.

Imagine, now, that I travel at a speed of 20 km/hr between 8 am and 9 am and at a speed of 40 km/hr between 9 am and 10 am. I cannot use the same operational method to conclude that between 8 am and 10 am I have travelled at a speed of 60 km/hr. Distance objects are amenable to the rule that 1 + 1 = 2 but velocity objects are not. We instinctively apply the 1 + 1 = 2 rule to apples and distances but not to velocities and think no more of it.
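The distinction in code: distances add, while speeds average over time.

```python
# One hour at 20 km/h, then one hour at 40 km/h.
speeds = [20, 40]                        # km/h, one entry per hour
distance = sum(speeds)                   # 60 km in total: distances obey 1 + 1 = 2
average_speed = distance / len(speeds)   # 30 km/h, not 20 + 40 = 60
print(distance, average_speed)
```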

At the start of the 20th century A.N. Whitehead and Bertrand Russell set out to prove, among other things, that 1 + 1 = 2, from even more fundamental notions. The result was *Principia Mathematica*, which was published in three volumes in 1910, 1912 and 1913. Ludwig Wittgenstein raised an immediate objection that the book used a circular argument: by using sets it was relying on objects that were already known to obey the 1 + 1 = 2 rule. But the prestige of Whitehead and Russell was such that the book was still widely acclaimed as having achieved a breakthrough.

Some years earlier Gottlob Frege had travelled a similar path. Just when his magnum opus *The Fundamental Laws of Arithmetic* was dusted and ready for publication he received a letter from Russell asking whether the set of all sets which are not members of themselves was a member of itself. Frege was gracious enough to conclude his second volume with this acknowledgement: "A scientist can hardly encounter anything more undesirable than to have the foundation collapse just as the work is finished. I was put in this position by a letter from Mr Bertrand Russell when the work was almost through the press."

Russell was to suffer a similar fate himself when Kurt Gödel in 1931 published a paper titled "On Formally Undecidable Propositions in *Principia Mathematica* and Related Systems". Wikipedia paraphrases the paper thus: "Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory." More simply, you cannot prove that 1 + 1 = 2.

But formal proofs apart, there is another reason for doubting the universal truth of 1 + 1 = 2. Primary school mathematics is divided into two compartments: arithmetic and geometry. For centuries it was believed that the only geometry possible was Euclidean geometry, in which parallel lines never meet and the sum of the measures of the three angles of a triangle is 180 degrees. At the beginning of the 19th century Gauss discovered the principles of non-Euclidean geometry, but the idea was so outlandish that he did not publish. It was left to the Hungarian mathematician Janos Bolyai and the Russian mathematician Nikolai Ivanovich Lobachevsky to lay out the basic principles around 1830.

Even so it was not thought to be anything more than an academic exercise. Only after Einstein published the General Theory of Relativity in 1916 was it realised that the world was basically non-Euclidean and that space was not flat but curved and that the curvature varied from point to point.

If school geometry was only one of many possibilities then from symmetry considerations one should conclude that the school arithmetic in which 1 + 1 = 2 is also only one of many possible arithmetics. In fairness to Whitehead and Russell it must be pointed out that their work was published before Einstein's General Theory.

Thus one is led to the conclusion that if one cannot prove that 1 + 1 = 2 it is because 1 + 1 is not always equal to 2. In deciding to apply the rule to distances but not to velocities we show that we know this. But because so many of the objects of our everyday experience do follow the 1 + 1 = 2 rule we conclude that it is universally true. That conclusion is no more than a reflection of the fact that as human beings we tend to reason inductively, accept the facts which fit our beliefs and thus reinforce them, and reject those which do not.

What we can say at best therefore is that 1 + 1 is sometimes equal to 2 though perhaps we can even go so far as to say that 1 + 1 is often equal to 2.

Read the entire paper: The mathematical equivalence of Keynesianism and monetarism
http://www.philipji.com/item/2014-05-13/the-mathematical-equivalence-of-Keynesianism-and-monetarism

The thought occurred on reading about (note: about, not reading) Thomas Piketty's *Capital in the Twenty-First Century*, which seems set to be the best-selling economics text of recent times. Piketty, who has in some quarters been hailed as a latter-day Marx, notes that income inequality in the US remained stable from 1910 to 1920, rose from 1920 to 1929, fell steeply after the Great Crash of October 1929 until the end of the war, remained stable until around 1980, and then rose steadily again, until in 2007 it rose above the level of 1928. A graph can be seen on Piketty's web site. To set right what he sees as a dire situation, possibly to prevent a capture of western governments by their poverty-stricken masses, Piketty suggests a general wealth tax and a top income tax rate of 80%.

Piketty seems to think that greater equality is something much to be desired. To test this I searched on the net for inequality measures for the Soviet Union to compare with the US. And I found some interesting figures in a paper, "Income Distribution in the USSR in the 1980s", by Michael V. Alexeev and Clifford G. Gaddy. For the US some comparable figures can be found on the Federal Reserve Bank of St Louis web site.

The table below compares the Gini coefficients of the two:

Year | USSR | USA |
---|---|---|
1980 | 0.290 | 0.403 |
1985 | 0.284 | 0.419 |
1988 | 0.290 | 0.426 |
1989 | 0.275 | 0.431 |
1990 | 0.281 | 0.428 |

Readers may recall that the only revolution that happened was in the USSR, not in the US.

Poring over English factory inspector reports in the sixties and seventies of the 19th century, Marx reached the conclusion that the overthrow of capitalism was imminent. If nothing else, Marx's prognostications should serve as a warning that one must not use short-term data to jump to eternal conclusions. In the graph the current trend of rising inequality dates from around 1980. Is there any other variable that could explain this, as well as the shifts in inequality mentioned earlier: stable from 1910 to 1920, a rise from 1920 to 1929, a fall thereafter until 1945, stable until 1980, and a rise thereafter?

It is illuminating to look at the following graph of US long term interest rates.

It is taken from *The Real Rate of Interest from 1800-1990: A Study of the US and UK* by Jeremy J. Siegel. The graph of inequality on Piketty's site and the interest rate graph here follow a similar trajectory. The period from 1910 to 1920 is a period of rising rates and stable inequality. Thereafter the interest rate falls and inequality grows. Similarly the period from 1980 onwards is a period of rising inequality, and interest rates begin to fall from around that date. The Depression years were an exception. So were the war years, but then that was a period of wage and price controls.

One cannot help but feel that low interest rates help push up asset prices and thus boost those who earn a substantial part of their income from financial assets. Now it so happens that the people who complain about rising inequality, Paul Krugman to take one example, are also the ones clamouring loudest for keeping interest rates low. Talk about the law of unintended consequences.

]]>
The unpredictability of money velocity was a key factor in hastening the demise of monetarism in the eighties. *Economics*, the textbook by Paul Samuelson and William Nordhaus, said in the 2005 edition: "As the velocity of money became increasingly unstable, the Federal Reserve gradually stopped using it as a guide for monetary policy... Indeed, in 1999, the minutes of the Federal Open Market Committee contain not a single mention of the term 'velocity' to describe the state of the economy or to explain the reasons for the committee's short-run policy actions."

Economists since Milton Friedman have sought to relate velocity to a number of factors and have been unsuccessful. The graph below will therefore come as an utter surprise to many. It plots the velocity of money against Moody's Seasoned Aaa Corporate Bond Yield. For the entire period of five decades, 1960 to 2011, it shows that velocity runs a course exactly parallel to the corporate bond yield. No one can be more surprised than I am. The velocity of money graph occurs in my book *The General Theory of Money*, published in May 2012, but until now I had never thought of connecting it with interest rates, if only because none of the papers I had read spoke of any relation between the two.

The monetary aggregate used in the graph is what I have called Corrected Money Supply in my book. A brief explanation is in order. Assume that you receive a salary of $1000 at the start of every month into your demand deposit. During the course of the month you spend 95% of this and save 5%. So the demand deposit would start at $1000 and run down to $50. However, you choose to keep a little extra in your demand deposit to allow for exigencies, say $500, roughly the value of your demand deposit at the middle of the month. If you add all the funds in all demand deposits in the economy there would thus be a certain amount of money that is never spent. Corrected Money Supply is M1 reduced by this money that is not a medium of exchange but is held purely with a precautionary motive.
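The run-down described above can be simulated; the assumption here (spending spread evenly over a 30-day month) is mine, made for illustration:

```python
# Salary of $1000 arrives on day 0; 95% of it is spent evenly over 30 days.
salary, spent_share, days = 1000.0, 0.95, 30

# Balance at the start of each day d:
balances = [salary - salary * spent_share * d / days for d in range(days + 1)]

print(balances[0])          # 1000.0 at the start of the month
print(balances[-1])         # 50.0 at the end
print(balances[days // 2])  # 525.0 at mid-month, roughly the $500 buffer
```

Funds held as a permanent buffer like this never change hands, which is why they are subtracted from M1 to get Corrected Money Supply.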

One economist who came close to identifying the relation between money velocity and interest rates was John Tatom. In a 1983 paper called "Was the 1982 velocity decline unusual?" published by the Federal Reserve Bank of St Louis he observed that during numerous recessions after 1947 the velocity of money fell. He was, however, puzzled by the fact that in the 1970 and 1973-75 recessions the velocity rose. After some analysis he concluded: "Explanations that focus on declining interest rates also do not match up well with the recent pattern of velocity declines. In the first quarter of 1982, corporate Aaa bond yields averaged 15.01 percent and had risen from 14.62 percent one quarter earlier or 14.92 percent two quarters earlier. During the remaining quarters of 1982, the bond yield declined to 14.51 percent, 13.75 percent and 11.88 percent. The pattern in the second half of 1982 is consistent with a decline in velocity. What remains unexplained, however, is the largest decline in velocity, which occurred in the first quarter."

If Tatom had had the right monetary aggregate he would have reached different conclusions. He would also have realised that if money velocity falls during most recessions it is because usually interest rates fall during recessions.

The mechanism relating a rise in interest rate to a rise in velocity (or as it has been sometimes called, the case of the missing money), is quite simple. But for economists who have been brought up to view money in a particular way it is very difficult to grasp.

Many of the revisions were marginal. For example, the average annual GDP growth rate for 1929-2012 was 3.3 per cent, just 0.1 percentage point higher than in previously published estimates. Similarly, the average annual increase in the price index for gross domestic purchases for 1929-2012 was lowered from 3 per cent to 2.9 per cent.

However, the estimates for personal income, disposable personal income and personal saving have undergone huge revisions. These revisions are mainly the result of using an "accrual approach for measuring defined benefit pension plans".

Under the new system, the sum of employers' actual and imputed contributions is the accrual-basis measure of the compensation income that employees receive from their participation in defined benefit pension plans. According to the BEA, "accrual accounting is preferred over cash accounting for compiling national accounts because it aligns production with the incomes earned from that production and records both in the same period; cash accounting, on the other hand, reflects incomes when paid, regardless of when they were earned". If you don't understand this, don't worry.

The Figure below shows the estimates for personal saving in June 2013 and then again in December 2013 for the period from 2001 to 2013. The new estimates are higher than the older estimates for 2001 to 2007, lower for 2008, and higher again from 2009 onwards. For October 2001, the new estimate is nearly 200% higher than the old one. For November 2001 it is nearly 100% higher. For April 2005 it is again nearly 100% higher.

When a mere accounting change results in such gargantuan revisions it is of course necessary to take a closer look.

On juxtaposing the old and new estimates for personal saving with the S&P 500 for the same period, as in the figure below, some patterns emerge. It is only during 2008 that the old estimates are higher than the new estimates. This also happens to be a period when the S&P 500 was falling. What this suggests is that much of the change is the result of the stocks being held by pension funds. It is true that in 2001 and 2002 when the S&P 500 was falling the new estimates were higher than the old ones, but again this can be explained by the fact that pension funds also held mortgage and mortgage-related bonds that were rising during the period.

During periods when the S&P 500 or the real estate market was rising rapidly employers needed to make little or no contribution to defined benefit pension funds. The rising value of the funds' assets accounted for the employers' "imputed contribution". The BEA's error is of course in adding the rise in value of DB pension fund assets to employees' income, disposable income, and personal saving, although the increase in no way adds to employees' current income, and it adds nothing to their disposable income because they never get to lay their hands on it. What the BEA likes to call higher personal income is contributed neither by production during the period nor by employers but is simply a reflection of higher markets. After 2008 one could say without much exaggeration that it is a direct result of QE.

The BEA justifies its new definition of personal income by saying it is consistent with business accounting. But a commonplace of business accounting is that the difference between the recorded value of available-for-sale securities and their fair market value is added to the equity section (or comprehensive income) of the balance sheet, and not to the current year's net income. The latter is of course what the BEA's change in accounting policy amounts to.
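A minimal sketch of the business-accounting convention referred to above; the figures are illustrative, not taken from any actual balance sheet:

```python
# Available-for-sale securities under business accounting: the unrealized
# gain goes to equity (other comprehensive income), not to net income.
cost_basis = 100.0     # illustrative purchase cost
fair_value = 130.0     # illustrative market value at year end

net_income_effect = 0.0                   # unrealized gain excluded from P&L
oci_effect = fair_value - cost_basis      # 30.0 parked in equity/OCI instead
print(net_income_effect, oci_effect)
```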

Whatever the faults of the old cash-accounting method it related sensibly to reality unlike the new accrual method.

On a lark I have drawn graphs showing the relation between Corrected Money Supply (for the benefit of those who haven't read my book, it is Seasonally Adjusted M1 plus Sweeps minus Seasonally Adjusted Personal Savings) and the S&P 500 for 1961-70, 1971-80, 1981-90, 1991-2000 and 2001-August 2013 (the latest month for which data are available). The amazing correlation surprised even me. There are periods, e.g. 2001 and 2002, when Corrected Money Supply rises but the S&P falls. But as the last graph shows, that was the period during which the Case-Shiller 20-City Home Price Index kept rising. Conversely, this index fell from 2010 to 2012 when the S&P kept rising in tune with increasing money.
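The definition restated in code; the magnitudes below are placeholders for illustration, not actual Fed data:

```python
def corrected_money_supply(m1_sa, sweeps, savings_sa):
    """CMS = seasonally adjusted M1 + sweeps - seasonally adjusted personal savings."""
    return m1_sa + sweeps - savings_sa

# Placeholder magnitudes in $bn, for illustration only:
print(corrected_money_supply(2500.0, 750.0, 650.0))   # 2600.0
```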

Readers may notice that the Corrected Money Supply graphs differ from those in the past. That is because the Fed has reworked a lot of data series right back to 1929.


Mauni Baba does not seem to have done too badly after all.
