Since its creation, the Federal Reserve has been the subject of controversy over the degree of its policy transparency. Some argue that Fed transparency is inadequate, while others insist it is excessive.
The selection of a new Fed Chairman to succeed Ben Bernanke is unlikely to end this argument. Indeed, given the growing complexity of financial instruments and their impact on markets today, calls to pull back the curtain on Fed decision making are likely not only to continue but to increase.
One of the main arguments made by those who support greater transparency is that it would improve the quality of data. There is growing evidence that defective Federal Reserve data played a role in producing the misperceptions of systemic risk that led up to the recent recession. Banks and Wall Street firms believed that taking on greater investment risk was prudent, as a result of the widely held view that systemic risk had decreased permanently. Even Nobel Laureate Robert Lucas argued in his 2003 Presidential Address to the American Economic Association that the Federal Reserve had become so good at its job that macroeconomists should cease research on countercyclical policy.
But better Federal Reserve data could have revealed that Fed policy had not greatly improved, and that — to cite but two examples — the widespread confidence in the “Greenspan Put” approach to monetary policy and the presumed permanent end of the business cycle were misguided. I argued as much in my book, Getting It Wrong: How Faulty Monetary Statistics Undermine the Fed, which won the American Publishers Award for Professional and Scholarly Excellence for the best book published in economics during 2012.
When Congress passed legislation in 1978 mandating audits of most government agencies by the General Accounting Office (GAO), it excluded the Federal Reserve System from this requirement. The following year, then-Chairman Paul Volcker made major policy changes to lower the inflation rate. Chairman Bernanke has stated that the 1978 audit exclusions were necessary to preserve Chairman Volcker's ability to act decisively. I was on the staff of the Federal Reserve Board in Washington, DC at that time. Paul Volcker was a determined chairman, whose actions were based upon his own strong convictions. The GAO could not have prevented him from implementing his chosen policy, as it has no policy-making authority.
The biggest danger of increased Congressional audit authority would be the second-guessing of unpopular policy actions for political reasons. There are well-known examples of such pressures. Over lunch with Arthur Burns, following his term as Federal Reserve Chairman (1970-78), I asked him whether any of his decisions had ever been influenced by Congressional pressure. He emphatically said no — not ever. But as Milton Friedman stated in the book I wrote with Paul Samuelson, Inside the Economist’s Mind, Nixon himself believed he had influenced Burns.
It is also worth noting that there are several instances in which faulty monetary data led policymakers astray. For example, my research suggests that Volcker's disinflationary policy was overdone and produced an unnecessarily severe recession. Poor data on the monetary aggregates, with improperly weighted components, led Volcker inadvertently to decrease monetary growth to a rate that, if appropriately measured, was half what he thought it was. In addition, during the decade leading up to the recent financial crisis, my data show that the Fed was feeding the bubbles far more aggressively than reflected in the Federal Reserve's official data. The pattern of such misperceptions associated with defective data is documented extensively in my book, Getting It Wrong.
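To see how improper weighting can distort a measured growth rate, compare the two index formulas involved (a sketch of the standard index-number reasoning, not a reproduction of my published calculations). The simple-sum aggregate treats every component dollar identically, while the Divisia index weights each component's growth by its share of expenditure on monetary services:

$$
M_t^{SS}=\sum_i m_{i,t},
\qquad
\ln M_t^{D}-\ln M_{t-1}^{D}=\sum_i \bar{s}_{i,t}\,\bigl(\ln m_{i,t}-\ln m_{i,t-1}\bigr),
$$

where $m_{i,t}$ is the quantity of component $i$ in period $t$ and $\bar{s}_{i,t}$ is the average of that component's expenditure share in periods $t-1$ and $t$. When dollar-large but illiquid components grow rapidly while the liquid components stagnate, the simple sum can report growth roughly double that of the properly weighted index, which is precisely the kind of gap that misled Volcker.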
Focus, for a moment, on the Federal Reserve's published monetary data. Is its quality the best possible, in accordance with best-practice economic index-number theory? Unfortunately, it is not. Consider, for example, the widely monitored data on banks' "non-borrowed reserves." Clearly the borrowed portion of reserves cannot exceed total reserves, so non-borrowed reserves cannot be negative. Yet the Federal Reserve recently reported non-borrowed reserves of minus $50 billion! How can this happen? In its definitions, the Federal Reserve chose to omit from "total reserves" large amounts of funds borrowed from the Fed, while including those same funds in its published figures for borrowed reserves. It is unlikely that such confusing accounting practices would survive scrutiny by an outside audit.
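The arithmetic behind that impossible number is easy to reproduce. Below is a minimal sketch with hypothetical magnitudes (the actual figures appear in the Fed's H.3 statistical release; the numbers here are chosen only to illustrate the definitional inconsistency):

```python
# Hypothetical illustration of how "non-borrowed reserves" can go negative
# when borrowings are counted inconsistently across the two series.

total_reserves = 45.0   # $ billions: balances counted as "total reserves"
discount_window = 5.0   # $ billions: ordinary borrowing, included in both series
facility_loans = 90.0   # $ billions: e.g., term-facility credit included in
                        # "borrowed reserves" but omitted from "total reserves"

borrowed_reserves = discount_window + facility_loans

# The published definition: non-borrowed = total - borrowed
non_borrowed = total_reserves - borrowed_reserves
print(non_borrowed)  # -50.0: "impossible" only because the definitions conflict

# A consistent definition would count the facility loans on both sides:
consistent_total = total_reserves + facility_loans
print(consistent_total - borrowed_reserves)  # 40.0: non-negative, as theory requires
```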
There are other serious defects in Fed data. According to Section 2a of the Federal Reserve Act, the Fed is mandated to “maintain long run growth of the monetary and credit aggregates commensurate with the economy’s long run potential….” Neglecting these instructions, Federal Reserve policymakers have stated that monetary aggregates currently are unimportant to their decisions. Whatever the merits or otherwise of this attitude, external analysts and researchers continue to depend on monetary data to obtain an accurate picture of the stance of policy, and many other central banks throughout the world continue to report data on multiple monetary aggregates.
During the 30 years since Congress excluded monetary policy from GAO audits, two of the monetary aggregates have been discontinued: the broad M3 and L aggregates. Only the narrower M1 and M2 aggregates remain. In addition, the Fed is almost alone among central banks in no longer gathering and publishing the interest rates paid by banks, leaving that data collection to private firms, which charge for access to the information.
Further, the M1 aggregate is severely biased downwards. Since 1994, the Federal Reserve has permitted banks to reclassify certain checking account balances as savings deposits for purposes of calculating legal reserve requirements. Banks supply to the Federal Reserve only the post-sweep checking account data. The resulting published data on checking deposits understate by approximately half the amount of such deposits held by the public at banks. Again, it seems unlikely that such an omission would survive an unconstrained audit by persons qualified in economic index-number theory.
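The correction is conceptually trivial, but it requires data the Fed no longer supplies. A minimal sketch with hypothetical dollar figures (banks report only the post-sweep balances, so the swept amounts must be estimated from outside sources):

```python
# Hypothetical reconstruction of pre-sweep checking deposits.
# Banks report balances only AFTER sweeping some checking funds into
# savings-type accounts to reduce reserve requirements.

reported_checking = 600.0   # $ billions: post-sweep figure in official data
estimated_swept = 600.0     # $ billions: balances reclassified as savings deposits

# What the public actually holds in checkable form:
actual_checking = reported_checking + estimated_swept
understatement = estimated_swept / actual_checking
print(f"Published data capture only {1 - understatement:.0%} of checking deposits")
# -> "Published data capture only 50% of checking deposits"
```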
With respect to the collection and publication of accurate data, creation of an independent institute for monetary and financial data would be preferable to an expanded audit, since such an institute would possess specialized expertise in economic measurement. There is an obvious potential for a conflict of interest in having data reported by the same agency that influences those data through its own policy actions.
These data problems have become so troubling that private organizations outside the Federal Reserve have begun filling the gaps independently. The Center for Financial Stability, a nonprofit think tank located in New York City, has begun supplying higher-quality financial data than the Fed. The data include the broad monetary aggregates M3 and M4, which the Fed no longer provides, computed with proper index-number-theoretic weighting of their components. Since 1922, when Irving Fisher's famous book The Making of Index Numbers appeared, adding up imperfect substitutes has been disreputable. Would you add up subway trains and roller skates to measure transportation services?
Except for the Federal Reserve, all other data-producing agencies in Washington, DC use the highly developed fields of aggregation and index-number theory to weight components properly. Examples of correct aggregation include the Commerce Department's National Accounts and the Labor Department's Consumer Price Index. The Federal Reserve stands alone in Washington in computing unweighted simple sums of such poor substitutes as highly liquid currency and highly illiquid nonnegotiable certificates of deposit to measure monetary services. Along with the Center for Financial Stability, the economics profession itself is now stepping in with the creation of a new society, the Society for Economic Measurement.
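To make the contrast concrete, here is a minimal sketch of the two computations, using the Törnqvist discrete-time approximation to the Divisia index together with my 1978 user-cost formula for the price of monetary services. The component quantities and interest rates below are hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

# Two hypothetical components of a monetary aggregate (quantities in $ billions):
# a highly liquid asset (currency) and a highly illiquid one (nonnegotiable CDs).
# Columns are periods t-1 and t; rates are illustrative, not actual data.
quantities = np.array([[800.0, 820.0],    # currency: slow growth
                       [1200.0, 1500.0]]) # CDs: fast growth
own_rates = np.array([0.00, 0.05])        # own rates of return on each component
benchmark = 0.08                          # rate on a pure investment asset

# User cost of each component's monetary services: pi_i = (R - r_i) / (1 + R)
user_costs = (benchmark - own_rates) / (1 + benchmark)

def shares(q):
    """Expenditure shares on monetary services in one period."""
    expend = user_costs * q
    return expend / expend.sum()

# Simple-sum growth: every dollar counts equally, liquid or not.
ss_growth = quantities[:, 1].sum() / quantities[:, 0].sum() - 1

# Divisia (Tornqvist) growth: weight each component's log growth by its
# average expenditure share across the two periods.
avg_shares = 0.5 * (shares(quantities[:, 0]) + shares(quantities[:, 1]))
divisia_growth = np.exp(
    (avg_shares * np.log(quantities[:, 1] / quantities[:, 0])).sum()
) - 1

print(f"Simple-sum growth: {ss_growth:.1%}")      # dominated by the illiquid CDs
print(f"Divisia growth:    {divisia_growth:.1%}") # weights liquid currency heavily
```

With these illustrative numbers the simple sum reports 16.0 percent growth while the Divisia index reports about 10.6 percent: the simple sum lets the fast-growing but illiquid CDs dominate, overstating the growth of actual monetary services.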
Good reason exists to question the quality and quantity of economic data available from the Federal Reserve. The cause of these inadequacies is the failure of the system's original design to recognize the conflict of interest inherent in having an institution with policy authority report the data that it itself influences. It is tempting to believe that routine GAO audits would solve all of these problems. But the Fed is very careful to keep its books in a condition that a routine accounting audit would leave largely unscathed. "Functional audits" by trained economists would be a different matter, and would be very unwelcome to the Fed.
Finally, and paradoxically, critics of expanded audits are frequently advocates of Congressional imposition of an interest-rate or inflation-targeting policy rule on the Federal Reserve, with heavy penalties for missing the target. Such a rule would constrain the Federal Reserve's discretionary policy authority far more than any audit.
William A. Barnett is the Oswald Distinguished Professor of Macroeconomics at the University of Kansas and Director of the Advances in Monetary and Financial Measurement Program at the Center for Financial Stability. The author of two books on economics and America’s financial system, he spent eight years on the staff of the Board of Governors of the Federal Reserve System in Washington, DC.