Sunday, May 18, 2014

In an otherwise fascinating piece in the Financial Times detailing the darkest hour of the Euro crisis (sorry, the article is behind a paywall), the author haphazardly throws in the line, "German opposition [to the ECB guaranteeing European sovereign debt] was rooted in its dark history: the hyperinflation of the interwar years that helped doom the Weimar Republic." It's the kind of line that is meant to add a universal gravitas to the article. It connects this meeting of world leaders in 2011 to the world-altering events of the 1930s. If you're serious about politics, economics, or finance, then you nod your head in agreement when you read this sentence, recognizing that today's events possess historical significance. The problem is, this sentence, while something that everyone seems to know, is absolutely false.
Yes, Weimar Germany experienced inflation... hyperinflation... but it had nothing to do with the downfall of the Republic or the rise of Nazi political power (the darkness referred to above). The hyperinflation occurred between 1921 and 1924. By late 1924, the Republic had issued a new currency, prices had stabilized, the economy had begun to grow, and the period 1924-1929 became known as the Goldene Zwanziger (the Golden Twenties). Furthermore, the Nazi party, which was founded in 1920 with roots dating back to WWI, won only 6.5% of the popular vote in the first federal election in which it participated (1924). Four years later, its popularity shrank to a mere 2.6% share of the vote.
I find it very difficult to square these election results with the claim that the hyperinflation of 1921-1924 had anything to do with the rise of Nazism, which dissolved the Republic in 1933. To explain the transition from Weimar Germany to Nazi Germany, one must look to the years between 1928 and 1932, when the Nazi party went from 2.6% of the vote to 18.3% in 1930, and then doubled its share again in 1932. What happened during these years? Well, the exact opposite of inflation: deflation (in the accompanying chart, the red line is the price level).
Sunday, March 16, 2014
Thoughts on Nick Rowe, Central Bank Operations, and the Money Multiplier
A new paper from the Bank of England about money creation in a modern economy has elicited a number of responses in the blogosphere over the last few days (here, here and here). Nick Rowe has an especially clear monetarist critique of the paper. But I think his response lays bare some of the deficiencies of market monetarism.
Nick starts off by arguing that QE is the same thing as normal open market operations (OMOs). I think he's right, but for the wrong reasons. He says, in both cases, "The central bank increases the money supply by buying something. See, it's easy!" But it's not quite that easy. The increase in the money supply does not come about through central bank asset purchases. In both scenarios, the money supply increases before the central bank buys something. Under normal operations, the money supply increases when a commercial bank originates a loan, thereby creating a new deposit. Only after that occurs (and not always even then) does the central bank buy something, adding reserves to the system to ensure that all payments clear and reserve requirements are met. Under conditions of QE (ZIRP!), a non-bank bondholder sells a bond to a commercial bank, or a primary dealer, in exchange for a new deposit. The bank, or primary dealer, then sells the bond to the central bank. In both cases, the central bank buys something only after the money supply has already been increased. Therefore, it's not accurate to claim, as Nick does, that the central bank increases the money supply. Instead, the central bank increases the monetary base in response to an endogenous increase in the money supply.
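To make that sequencing concrete, here is a minimal sketch in Python. The reserve ratio, the class, and the dollar amounts are all illustrative assumptions, not a model of any actual central bank's operating procedure; the only point is the ordering, with deposits moving first and the base following.

```python
# Toy sequence: bank lending creates deposits first; the central bank
# supplies reserves afterward. All numbers are illustrative assumptions.

RESERVE_RATIO = 0.10  # assumed required-reserve ratio r


class Economy:
    def __init__(self):
        self.deposits = 0.0  # broad money held by the non-bank public
        self.reserves = 0.0  # monetary base held by the commercial bank

    def bank_makes_loan(self, amount):
        # Step 1: loan origination creates a new deposit.
        # Broad money rises; the monetary base has not moved yet.
        self.deposits += amount

    def central_bank_accommodates(self):
        # Step 2: the central bank buys assets (an OMO) to supply
        # whatever reserves the banking system now demands.
        shortfall = RESERVE_RATIO * self.deposits - self.reserves
        if shortfall > 0:
            self.reserves += shortfall  # the base expands after money did


econ = Economy()
econ.bank_makes_loan(100.0)
print(econ.deposits, econ.reserves)  # 100.0 0.0  -> money up, base unchanged
econ.central_bank_accommodates()
print(econ.deposits, econ.reserves)  # 100.0 10.0 -> base follows money
```

Note that reversing the two calls is exactly the causal story the sketch is meant to deny: nothing in step 2 forces step 1 to happen.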
But Nick has an answer for this:
How much money commercial banks create... depends on what the central bank is doing.
What central bank operations, exactly, do commercial banks respond to? As we've seen, for normal OMOs and QE, the central bank always supplies whatever amount of monetary base is demanded. So why would adjusting the monetary base beforehand make any difference? The answer for Nick is the money multiplier. He expresses this in three different ways:
the demand for base money is some proportion r of the stock of broad money
if the central bank wanted a temporary increase in the inflation rate, and so a permanent rise in the price level, it would need to shift the supply function of base money, to create a permanent rise in the monetary base, and a permanent rise in broad money, and the textbook money multiplier would tell us that broad money would increase by 1/r times the increase in base money.
if the central bank shifts the supply function of base money $1 to the right, that must increase the equilibrium stock of broad money by $(1/r).
The problem is that even if you accept the first two formulations, which are consistent with my description of central bank operations, they do not entail the third, which conflicts with my description. The first two express an ex-post mathematical relationship between the supply function of base money and the stock of broad money. The third formulation, however, expresses an ex-ante causal relationship running from the monetary base to broad money. In other words, while it may be true that if we want to increase the broad money supply, then we must increase the monetary base (if P, then Q), it does not follow that if we increase the monetary base (Q), then the money supply will increase (P). Nick and his market monetarist compatriots are guilty of this logical error (affirming the consequent), and because of it they have a flawed conception of money creation and, ipso facto, of monetary policy.
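To see the asymmetry in a worked example, assume (purely for illustration) a reserve ratio of r = 0.1:

```latex
% Worked example; r = 0.1 is an assumed value, chosen only for illustration.
% The ex-post identity relating base-money demand to broad money:
\[
  B = rM \quad\Longleftrightarrow\quad M = \frac{B}{r}
\]
% Banks lend first: new loans create \Delta M = \$100 of deposits, so
% reserve demand rises by
\[
  \Delta B = r\,\Delta M = 0.1 \times \$100 = \$10,
\]
% which the central bank supplies, and the identity holds. Running the
% same arithmetic in reverse,
\[
  \Delta M = \frac{\Delta B}{r} = \frac{\$10}{0.1} = \$100,
\]
% treats the identity as a causal lever; but an injected \$10 of base can
% simply sit as excess reserves, leaving \Delta M = 0.
```

The arithmetic is symmetric; on this account, the causation is not.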
More on helicopter money later.
Saturday, March 8, 2014
Monetary Policy as Regressive Income Redistribution
Since the Volcker revolution of 1979, when monetary policy became the primary method of managing aggregate demand, wage growth has been stagnant. Now, with the slightest prospect of an uptick in wages, the Fed is gearing up to raise interest rates in 2015 in order to slow the economy, even though the unemployment rate is still around 6.7% and the labor force participation rate is at a 30-year low (63.0%).
Is this really the type of economic management we want for the country, where any hint of growth in middle-class wages leads to the Fed slamming on the brakes? Notice that this policy decision, to use the central bank to manage aggregate demand instead of fiscal policy, coincides with a huge regressive redistribution of income and wealth.
Saturday, March 1, 2014
Artificial Intelligence or Creationism
At a dinner party the other night, as the wine flowed, or in my case, the Ice House, the conversation turned to the possibility of robot consciousness, or strong artificial intelligence (hereafter AI). What struck me about the arguments put forward against AI was the belief that human consciousness is essentially mysterious. The working assumption for those denying the possibility of AI was that a purely physical, mechanistic process cannot fully explain the operations of the human mind. They argue that since we can never fully understand our own minds, we will never create artificial minds. But this puts the anti-AI crowd in a very awkward position. By asserting that human minds are ultimately mysterious, one not only disallows the possibility of artificial intelligence, one also rejects the process of Darwinian natural selection. In other words, if human consciousness is inexplicable without resorting to some mysterious, non-physical power, then the evolution of our species is inexplicable without resorting to some mysterious, non-physical power. Those who deny the possibility of artificial intelligence, in this way, implicitly endorse some form of creationism.
The dinner party anti-AI argument went something like this: machines will never be smarter than humans because machines can only perform actions that are executable by their programming. The machine's program was written by a human. Therefore, the human does the thinking, while the machine only "unthinkingly" does what it's programmed to do. Describing machines with properties like "thinking," "learning," or "having beliefs" is anthropomorphizing, since only humans really do these things, while machines can, at best, only simulate human behavior.
But this line of reasoning begs the question. One can't just assume that only humans think, learn, or believe; that's precisely the question under discussion. In order to validly deduce the conclusion that machines will never think for themselves, one must work from the premise that a human will never be able to write a program that allows a machine to think for itself. But this premise assumes humans will never be able to write a program that captures the way the human mind works. But why should we assume that? Once we fully understand how the human brain functions, then we should, in principle, be able to replicate that functioning. If humans think, learn, and have beliefs, then our replicated brain should think, learn, and have beliefs in exactly the way we, humans, do. And this is where the mysteriousness creeps in for the anti-AI folk. They must resort to denying that we can ever fully understand the human mind in physical, mechanistic terms. They must cling to a dualism separating the mind from the brain. But if you take that route, you should understand there's no place for dualism in a naturalist, Darwinian account of the evolution of our species.
So beware, anti-AIists: if you can't stomach the possibility of robot consciousness, you had better be able to stomach the dogma of creationism or Intelligent Design.
Recommended read: Dan Dennett's comparison of Darwin and Turing.
Saturday, February 22, 2014
Good Neuroscience, Bad Philosophy
I recently attended a talk by neurologist and author Robert Burton, entitled Certainty and the Self, in which he presented some interesting results from psychological experiments, but then quickly proceeded to non sequitur conclusions about knowledge, the self, and free will. I came away with a strengthened recognition that thinking clearly about these ideas is harder than it looks. One would think that a smart guy like Burton, who's had his fingers in brains (literally) for decades, would have something useful to tell us about our thinking selves. Yet, I couldn't help but feel he was out of his depth.
While an expert on what happens inside the brain, Burton employed rather unsophisticated conceptions of knowledge, the self, and free will. He seemed to assume that knowledge requires certainty and that the self and free will presuppose some mysterious non-material entity. But these assumptions just demonstrate a lack of familiarity with the current philosophical literature on these issues. Philosophers have long recognized that knowledge is fallible, and outside the department at Notre Dame, very few are dualists these days. I have noticed a tendency for scientists of different stripes, and laypeople for that matter, to roll their eyes at philosophers. I once overheard an economist (I'm using "scientist" in the loosest sense here) say, "Oh, you're a philosopher? Here's a nickel, tell me what you think." But if you want to avoid embarrassment when discussing philosophical issues like skepticism and free will, you may want to read what people who think about this stuff for a living actually have to say. Otherwise, you're liable to argue against straw men.
The problem, however, was not limited to a lack of familiarity with the philosophical literature. Making valid arguments also seems to be more difficult than I previously thought. Burton gave us an entertaining anecdote about witnesses of 9/11 misremembering what actually happened. The experiment yielded only about a 10% success rate for those claiming to remember, after 10 years had passed, what happened on that "unforgettable" day. One subject was so sure that his memory was correct that he insisted that what he had written in his journal a mere two hours after the incident must have been mistaken. But from this anecdote, Burton not only concluded that eyewitness testimony and memory are unreliable, but that this was also evidence that we lack conscious thought altogether. Where did that come from? Now, maybe he was trying to be provocative and get a rise from us, but he didn't stop there. He later claimed that neurological evidence shows that reasoning is subjective and idiosyncratic, and, therefore, no one way of reasoning is better than another (so why, again, should I pay attention to arguments made from neuroscience?).
At this point, I pushed back. I asked him whether, if he were truly committed to that position, he was willing to accept that believing the earth is flat is just as reasonable as believing it's spherical. Surprise! He wasn't willing to accept that. In turn, he admitted that science gives us objective knowledge of the world, but circumscribed its domain to that which could be subjected to controlled, empirical experiments. Presumably, he meant that the self and free will are not objects of scientific study, and that, therefore, only idiosyncratic opinions can be asserted about such matters. In other words, objectivity for the scientific world; relativism for everything else. But how does one know where to draw the line between science and non-science? In essence, Burton was reaffirming the dichotomy that led to Karl Popper's demarcation problem. The trouble is, all attempts to provide a criterion for distinguishing between science and pseudoscience ultimately fail. Either the criterion is too strong, and even physics fails the test, or it's too weak, and astrology passes. Or, surprisingly, a single criterion, like falsifiability, proves both too strong and too weak thanks to the underdetermination of theory by evidence. Again, if Burton were familiar with the literature in the philosophy of science, he would know he was heading down a dead end.
All in all, Burton had good intentions. His skepticism was motivated by a disdain for hubris; he offers a warning against those who make absolute claims without the grounds to do so. Yet his doubts about the self and free will stem more from an oversimplification of the concepts themselves than from any neuroscientific evidence. Even more sinister, his assertion that reason is subjective and relative risks undermining the very science on which he bases his conclusions. He unintentionally advanced a radical skepticism that leads to an anything-goes relativism he doesn't actually endorse.
Recommended read: Eddy Nahmias offers a succinct, philosophically sophisticated account of the scientific challenges to free will.
Saturday, February 15, 2014
Robot Suffrage Now!
After recently discovering this thing called the technological singularity, I want to be on the right side of history here. I may be jumping the gun by a few decades (or centuries), but with my first post I hereby attest to my future robot overlords: I'm on your side.
Robot suffrage now and forever!