Book Review: The Black Swan

I.

Writing a review of The Black Swan is a nerve-wracking experience.

First, because it forces me to reveal I am about ten years behind the times in my reading habits.

But second, because its author Nassim Nicholas Taleb is infamous for angry Twitter rants against people who misunderstand his work. Much better men than I have read and reviewed Black Swan, messed it up, and ended up victims of Taleb’s acerbic tongue.

One might ask: what’s the worst that could happen? A famous intellectual yells at me on Twitter for a few minutes? Isn’t that normal these days? Sure, occasionally Taleb will go further and write an entire enraged Medium article about some particularly egregious flub, but only occasionally. And even that isn’t so bad, is it?

But such an argument betrays the following underlying view:

It assumes that events can always be mapped onto a bell curve, with a peak at the average and dropping off quickly as one moves towards extremes. Most reviews of Black Swan will get an angry Twitter rant. A few will get only a snarky Facebook post, or an entire enraged Medium article. By the time we get to real extremes in either direction – a mere passive-aggressive Reddit comment, or a dramatic violent assault – the probabilities are so low that they can safely be ignored.

Some distributions really do follow a bell curve. The classic example is height. The average person is about 5’7. The likelihood of anyone being a different height drops off dramatically with distance from the mean. Only about one in a million people should be taller than 7 feet; only one in a billion should be as tall as 7’5. Nobody differs in height from anyone else by an order of magnitude. Taleb calls the world of bell curves and minor differences Mediocristan. If Taleb’s reaction to bad reviews dwells alongside height in Mediocristan, I am safe; nothing an order of magnitude worse than an angry Twitter rant is likely to happen in entire lifetimes of misinterpreting his work.

But other distributions are nothing like a bell curve. Taleb cites power-law distributions as an example, and calls their world Extremistan. Income inequality lives in Extremistan. If income followed a bell curve around the median household income of $57,000, with a standard deviation scaled the same way as height, then a rich person earning $70,000 would be as remarkable as a tall person hitting 7 feet. Someone who earned $76,000 would be the same kind of prodigy of nature as the 7’6 Yao Ming. Instead, people earning $70,000 are dirt-common, some people earn millions, and the occasional tycoon can make hundreds of millions of dollars per year. In Mediocristan, the extremes don’t matter; in Extremistan, sometimes only the extremes matter. If you have a room full of 99 average-height people plus Yao Ming, Yao only has 1.3% of the total height in the room. If you have a room full of 99 average-wealth people plus Jeff Bezos, Bezos has 99.99% of the total wealth.
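Here’s a minimal simulation sketch of the difference. Every parameter is an illustrative assumption of mine – the normal spread for height, the Pareto tail exponent of 1.16 for wealth – not a number from the book:

```python
# Minimal sketch: what share of a room's total does the single largest
# observation hold in Mediocristan (bell curve) vs. Extremistan (power law)?
# All parameters are illustrative assumptions, not Taleb's numbers.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mediocristan: heights in inches, mean 67 ("about 5'7"), sd 3
heights = rng.normal(loc=67, scale=3, size=n)

# Extremistan: classical Pareto wealth, tail exponent alpha = 1.16
# (the exponent often associated with the 80/20 rule), scale $57,000
wealth = (rng.pareto(a=1.16, size=n) + 1) * 57_000

for name, x in [("height", heights), ("wealth", wealth)]:
    print(f"{name}: largest observation holds {x.max() / x.sum():.4%} of the total")

# Typical output: the tallest person holds ~0.001% of the total height,
# while the richest person can hold a double-digit share of the total wealth.
```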

Here are Taleb’s potential reactions graphed onto a power-law distribution. Although the likelihood of any given reaction continues to decline the further it is from average, it declines much less quickly than on the bell curve. Violent assault is no longer such a remote possibility; maybe my considerations should even be dominated by it.

So: are book reviews in Mediocristan or Extremistan?

I notice this BBC article about an author who hunted down a bad reviewer of his book and knocked her unconscious with a wine bottle. And Lord Byron wrote such a scathing meta-review of book reviewers that multiple reviewers challenged him to duels – though none seem ever to have been fought, and I’m not sure Lord Byron is a good person to generalize from anyway.

19th century intellectuals believed a bad review gave John Keats tuberculosis; they were so upset about this that they used his gravestone to complain:

This Grave contains all that was Mortal of a Young English Poet, Who, on his Death Bed, in the Bitterness of his Heart, at the Malicious Power of his Enemies, Desired these Words to be engraven on his Tomb Stone: “Here lies One Whose Name was writ in Water.”

Keats’ friend Shelley wrote the poem Adonais to memorialize the event, in which he said of the reviewer:

Our Adonais has drunk poison—oh!
What deaf and viperous murderer could crown
Life’s early cup with such a draught of woe?
The nameless worm would now itself disown:
It felt, yet could escape, the magic tone
Whose prelude held all envy, hate and wrong,
But what was howling in one breast alone,
Silent with expectation of the song,
Whose master’s hand is cold, whose silver lyre unstrung.

So are book reviews in Mediocristan or Extremistan? Well, every so often your review causes one of history’s greatest poets to die of tuberculosis, plus another great poet writes a five-hundred-line poem condemning you and calling you a “nameless worm”, and it becomes a classic that gets read by millions of schoolchildren each year for centuries after your death. And that’s just the worst thing that’s happened because of a book review so far. The next one could be even worse!

II.

This sounds like maybe an argument for inaction, but Taleb is more optimistic. He points out that black swans are often good. For example, pharma companies usually just sit around churning out new antidepressants that totally aren’t just SSRI clones, they swear. If you invest in one of these companies, you may win a bit if their SSRI clone succeeds, and lose a bit if it fails. But drug sales fall on a power law; every so often companies get a blockbuster that lets them double, triple, or decuple their money. Tomorrow a pharma company might discover the cure for cancer, or the cure for aging, and get to sell it to everyone forever. So when you invest in a pharma company, you have randomness on your side: the worst that can happen is you lose your money, but the best that can happen is multiple-order-of-magnitude profits.

Taleb proposes a “barbell” strategy of combining some low-risk investments with some that expose you to positive black swans:

If you know that you are vulnerable to prediction errors, and if you accept that most “risk measures” are flawed, because of the Black Swan, then your strategy is to be as hyperconservative and hyperaggressive as you can be instead of being mildly aggressive or conservative. Instead of putting your money in “medium risk” investments (how do you know it is medium risk? by listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills—as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital-style portfolios. That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your “floor,” the nest egg that you have in maximally safe investments. Or, equivalently, you can have a speculative portfolio and insure it (if possible) against losses of more than, say, 15 percent. You are “clipping” your incomputable risk, the one that is harmful to you. Instead of having medium risk, you have high risk on one side and no risk on the other. The average will be medium risk but constitutes a positive exposure to the Black Swan […]

The “barbell” strategy [is] taking maximum exposure to the positive Black Swans while remaining paranoid about the negative ones. For your exposure to the positive Black Swan, you do not need to have any precise understanding of the structure of uncertainty. I find it hard to explain that when you have a very limited loss you need to get as aggressive, as speculative, and sometimes as “unreasonable” as you can be.
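Here’s a toy numerical sketch of that barbell. The return numbers – a 2% floor, speculative bets that usually go to zero but occasionally pay 30x – are my assumptions, not Taleb’s:

```python
# Toy version of the barbell in the quote above: ~90% in a near-riskless
# floor, ~10% spread across speculative bets that usually expire worthless
# but occasionally pay off 30x. All return numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def barbell_year(safe_frac=0.9, safe_rate=0.02, n_bets=10, p_hit=0.05, payoff=30.0):
    """One year's portfolio return under the assumed payoffs."""
    stake = (1 - safe_frac) / n_bets           # equal stake per speculative bet
    hits = (rng.random(n_bets) < p_hit).sum()  # how many bets paid off
    return safe_frac * (1 + safe_rate) + stake * hits * payoff - 1

years = np.array([barbell_year() for _ in range(10_000)])
print(f"worst year:  {years.min():+.1%}")   # bounded near -8%: the floor holds
print(f"median year: {np.median(years):+.1%}")
print(f"best year:   {years.max():+.1%}")   # open-ended upside
```

The point of the shape: the left tail is clipped at the floor, while the right tail stays open to positive black swans.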

So: how good can a book review get?

Here’s a graph of all the book reviews I’ve ever done by hit count (in thousands). I’m not going to calculate it out, but it looks like a power law distribution! Some of my book reviews have been pretty successful – my review of Twelve Rules got mentioned in The Atlantic. Can things get even better than that? I met my first serious girlfriend through a blog post. Can things get even better than that? I had someone tell me a blog post on effective altruism convinced them to pledge to donate 10% of their salary to efficient charities forever; given some conservative assumptions, that probably saves twenty or thirty lives. So a book review has a small chance of giving a great poet tuberculosis, but also a small chance of saving dozens of lives. Overall it seems worth it.
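If you did want to calculate it out, the quick-and-dirty check is to fit the rank-size relationship on log-log axes. The hit counts below are made up for illustration; a rigorous power-law test would take more care than this:

```python
# Quick-and-dirty power-law check: on log-log axes, hit counts vs. rank
# should be roughly a straight line. Hit counts below are hypothetical.
import numpy as np

hits = np.array([180, 95, 60, 41, 30, 22, 17, 13, 10, 8])  # thousands, made up
ranks = np.arange(1, len(hits) + 1)

# Fit log(hits) = c + b*log(rank); a clearly negative b with a good
# straight-line fit is the usual (rough) power-law signature.
b, c = np.polyfit(np.log(ranks), np.log(hits), deg=1)
print(f"fitted slope: {b:.2f}")  # the made-up data above gives about -1.3
```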

III.

The Black Swan uses discussions of power laws and risk as a jumping-off point to explore a wider variety of topics about human fallibility. This places it in the context of similar books about rationality and bias that came out around the same time. I’m especially thinking of Philip Tetlock’s Superforecasting, Nate Silver’s The Signal And The Noise, Daniel Kahneman’s Thinking Fast And Slow, and of course The Sequences. The Black Swan shares much of its material with these – in fact, it often cites Kahneman and Tetlock approvingly. But aside from the more in-depth discussion of risk, I notice two important points of this book Taleb keeps coming back to again and again, which as far as I know are unique to him.

The first is “the ludic fallacy”, the false belief that life works like a game or a probability textbook thought experiment. Taleb cautions against the (to me tempting) mistake of comparing black swans to lottery tickets – ie “investing in pharma companies is like having a lottery ticket to win big if they invent a blockbuster”. The lottery is a game where you know the rules and probabilities beforehand. The chance of winning is whatever it is. The prize is whatever it is. You know both beforehand; all you have to do is crunch the numbers to see if it’s a good deal.

Pharma – and most other real-life things – are totally different. Nobody hands you the chance of a pharma company inventing a blockbuster drug, and nobody hands you the amount of money you’ll win if it does. There is Knightian uncertainty – uncertainty about how much uncertainty there is, uncertainty that doesn’t come pre-quantified.

Taleb gives cautionary examples of what happens if you ignore this. You make some kind of beautiful model that tells you there’s only a 0.01% chance of the stock market doing some particular bad thing. Then you invest based on that model, and the stock market does that bad thing, and you lose all your money. You were taking account of the quantified risk in your model, but not of the unquantifiable risk that your model was incorrect.
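As a sketch of how big that gap can get: generate fat-tailed “returns”, fit a normal model to them, and compare the model’s quoted crash probability with the observed frequency. The Student-t parameters and the 10% crash threshold are my illustrative assumptions:

```python
# Toy model risk: fit a normal distribution to fat-tailed daily "returns",
# then compare the model's crash probability with the observed frequency.
# Student-t parameters and the crash threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
returns = stats.t.rvs(df=3, scale=0.01, size=250 * 40, random_state=rng)  # ~40 years

mu, sigma = returns.mean(), returns.std()  # the fitted "beautiful model"

crash = -0.10  # a 10% one-day drop
p_model = stats.norm.cdf(crash, loc=mu, scale=sigma)
p_actual = (returns < crash).mean()

print(f"model says: {p_model:.2e}")   # astronomically small
print(f"data says:  {p_actual:.2e}")  # orders of magnitude larger
```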

In retrospect, this is an obvious point. But it’s also obvious in retrospect that everything classes teach about probability falls victim to it, to the point where it’s hard to even think about probability in non-ludic terms. I keep having to catch myself writing some kind of “Okay, assume the risk of a Black Swan is 10%…” example in this review, because then I know Taleb will hunt me down and violently assault me. But it’s hard to resist.

I would like to excuse myself by saying it’s impossible to discuss probability without these terms, or at least that you have to start by teaching these terms and then branch into the real-world unquantifiable stuff, except that Taleb managed to write his book without doing either of those things. Granted, the book is a little bit weird. You could go through several chapters on the Lebanese Civil War or whether the French Third Republic had the best intellectuals, without noticing it was a book on probability. Nevertheless, it sets itself the task of discussing risk without starting with the ludic fallacy, and it succeeds.

I don’t know to what degree the project of “becoming well-calibrated with probabilities” is a solution to the ludic fallacy, or a case of stupidly falling victim to the ludic fallacy.

The second key concept of this book – obviously not completely original to Taleb, but I think Taleb gives it a new meaning and emphasis – is “Platonicity”, the anti-empirical desire to cram the messy real world into elegant theoretical buckets. Taleb treats the bell curve as one of the clearest examples; it’s a mathematically beautiful example of what certain risks should look like, so incompetent statisticians and economists assume that risks in a certain domain do fit the model.

He ties this into Tetlock’s “fox vs. hedgehog” dichotomy. The prognosticators who tried to fit everything to their theory usually did badly; the ones who accepted the complexity of reality and maintained a toolbox of possibilities usually did better.

He also mentions – and somehow I didn’t know this already – that modern empiricism descends from Sextus Empiricus, a classical doctor who popularized skeptical and empirical ideas as the proper way to do medicine. Sextus seems like a pretty fun guy; his surviving works include Against The Grammarians, Against The Rhetoricians, Against The Geometers, Against The Arithmeticians, Against The Astrologers, Against The Musicians, Against The Logicians, Against The Physicists, and Against The Ethicists. Medicine is certainly a great example of empiricism vs. Platonicity, with Hippocrates and his followers cramming everything into their preconceived Four Humors model – to the detriment of medical science – for thousands of years.

But Empiricus’ solution – to not hold any beliefs, and to act entirely out of habit – falls short. And I am not sure I understood what Taleb is arguing for here. There’s certainly a true platitude in this area (wait, does “platitude” share a root with “Platonic”? It looks like both go back to a Greek word meaning “broad”, but the connection is coincidental. Whatever.) of “try to go where the evidence guides you instead of having prejudices”. But there’s also a point on the other side – unless you have some paradigm to guide you, you exist in a world of chaotic noise. I am less sanguine than Taleb that “be empiricist, not theoretical” is sufficient advice, as opposed to “find the Golden Mean between empiricism and theory” – which is of course a much harder and more annoying adage, since finding a Golden Mean isn’t trivial.

That is – what would it mean for a doctor to try to do medicine without the “theory” that the heart pumps the blood? She’d find a patient with all the signs of cardiogenic shock, and say “Eh, I dunno. Maybe I should x-ray his feet, or something?” What if she had no preconceived ideas at all? Would she start reciting Sanskrit poetry, on the grounds that there’s no reason to think that would help more or less than anything else? Whereas a doctor who had read a lot of medical school textbooks – Taleb hates textbooks! – would immediately recognize the signs of cardiogenic shock, do the tests that the textbooks recommend, and give the appropriate treatment.

Yes, eventually an empiricist doctor would notice empirical facts that made her believe the heart pumped blood (and all the other true things). But then she would…write it down in a textbook. That’s what theories are – crystallized, compressed empiricism.

I think maybe Empiricus and Taleb would retort that some people form theories with only a smidgeon of evidence – I don’t know what evidence Hippocrates had for the Four Humors, but it clearly wasn’t enough. And then they stick to them dogmatically even when the evidence contradicts them. I agree with both criticisms. But then it seems like the problem is bad theories, rather than ever having theories at all. Four Humors Theory and Germ Theory are both theories – it’s just that one is wrong and the other is right. If nobody had ever been willing to accept the germ theory of disease, we’d be in a much worse place. And you can’t just say “Well, you could atheoretically notice antibiotics work and use them empirically” – much of the research into antibiotics, and the ways we use antibiotics, are in place because we more or less understand what they’re doing.

I would argue that Empiricus and Taleb are arguing not for experience over theory, but for the adjustment of certain parameters of inference – how much fudge factor we accept in compressing our data, how much we weight prior probabilities versus new evidence, how surprised to be at evidence that doesn’t fit our theories. I expect Empiricus, Taleb, and I are in agreement about which direction we want those parameters shifted. I know this sounds like a boring intellectual semantic point, but I think it’s important and occasionally saves your life if you’re practicing some craft like medicine that has a corpus of theory built up around it which you ignore at your peril.

(I also think they fail to understand the degree to which common sense is just under-the-hood inference in the same way that abstract theorizing is above-the-hood inference, and so doesn’t rescue us from these concerns).

Charitably, The Black Swan isn’t making the silly error of denying a Golden Mean of parameter position. It’s just arguing that most people today are on the too-Platonic side of things, and so society as a whole needs to shift the parameters toward the more-empirical side. Certainly this is true of most people in the world Nassim Nicholas Taleb inhabits. In Taleb’s world famous people walk around all day asserting “Everything is on a bell curve! Anyone who thinks risk is unpredictable is a dangerous heretic!” Then Taleb breaks in past their security cordon and shouts “But what if things aren’t on a bell curve? What if there are black swans?!” Then the famous person has a rage-induced seizure, as their bodyguards try to drag Taleb away. Honestly it sounds like an exciting life.

Lest you think I am exaggerating:

The psychologist Philip Tetlock (the expert buster in Chapter 10), after listening to one of my talks, reported that he was struck by the presence of an acute state of cognitive dissonance in the audience. But how people resolve this cognitive tension, as it strikes at the core of everything they have been taught and at the methods they practice, and realize that they will continue to practice, can vary a lot. It was symptomatic that almost all people who attacked my thinking attacked a deformed version of it, like “it is all random and unpredictable” rather than “it is largely random,” or got mixed up by showing me how the bell curve works in some physical domains. Some even had to change my biography. At a panel in Lugano, Myron Scholes once got into a state of rage, and went after a transformed version of my ideas. I could see pain in his face. Once, in Paris, a prominent member of the mathematical establishment, who invested part of his life on some minute sub-sub-property of the Gaussian, blew a fuse—right when I showed empirical evidence of the role of Black Swans in markets. He turned red with anger, had difficulty breathing, and started hurling insults at me for having desecrated the institution, lacking pudeur (modesty); he shouted “I am a member of the Academy of Science!” to give more strength to his insults.

One hazard of reviewing books long after they come out is that, if the book was truly great, it starts sounding banal. If its points were so devastating and irrefutable that they became universally accepted, then it sounds like the author is just spouting cliches. I think The Black Swan might have reached that level of influence. I haven’t even bothered explaining the term “black swan” because I assume every educated reader now knows what it means. So it seems very possible that pre-book society was so egregiously biased toward the Platonic theoretical side that it needed someone to tell it to shift in the direction of empiricism, Taleb did that, and now he sounds silly because everyone knows that you can’t just declare everything a bell curve and call it a day. Maybe this book should be read backwards. But the nature of all mental processes as a necessary balance between theory and evidence is my personal hobby-horse, just as evidence being good and theory being bad is Taleb’s personal hobby-horse, so I can’t let this pass without at least one hobby-cavalry-duel.

I have a more specific worry about skeptical empiricism, which is that it seems like an especially dangerous way to handle Extremistan and black swans.

Taleb memorably compares much of the financial world to “picking up pennies in front of a steamroller” – ie, it is very easy to get small positive returns most of the time as long as you expose yourself to horrendous risk.

EG: imagine living in sunny California and making a bet with your friend about the weather. Each day it doesn’t rain, he gives you $1. Each day it rains, you give him $1000. Your friend will certainly take this bet, since long-term it pays off in his favor. But for the first few months, you will look pretty smart as you pump a constant stream of free dollars out of him. Your stupidity will only become apparent way down the line, when one of the state’s rare rainstorms arrives and you’re on the hook for much more than you won.

Here the theorist will calculate the probability of rain, calculate everybody’s expected utility, and predict that your friend will eventually come out ahead.

But the good empiricist will just watch you getting a steady stream of free dollars, and your friend losing money every day, and say that you did the right thing and your friend is the moron!
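A quick simulation of the rain bet makes the point; the rain probability (~15 wet days a year) is my assumption:

```python
# The rain bet, simulated. The rain probability (~15 wet days a year)
# is an assumption; the $1 / $1000 stakes are from the example above.
import numpy as np

rng = np.random.default_rng(3)
p_rain = 15 / 365
rainy = rng.random(5 * 365) < p_rain  # five years of daily weather

winnings = np.where(rainy, -1000, 1).cumsum()
first_rain = int(np.argmax(rainy))    # index of the first rainy day

print(f"looked like a genius for the first {first_rain} days")
print(f"balance after five years: {int(winnings[-1]):+,} dollars")
# Expected value per day: (1 - p)*$1 - p*$1000, about -$40 here --
# despite the long winning streaks an empiricist would observe.
```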

More generally, as long as Black Swans are rare enough not to show up in your dataset, empiricists are likely to fall for picking-up-pennies-in-front-of-a-steamroller bets, whereas (sufficiently smart) theorists will reject them.

For example, Banker 1 follows a strategy that exposes herself terribly to black swan risk, and ensures she will go bankrupt as soon as the market goes down, but which makes her 10% per year while the market is going up. Banker 2 follows a strategy that protects herself against black swan risk, but only makes 8% per year while the market is going up. A naive empiricist will judge them by their results, see that Banker 1 has done better each of the past five years, and give all his money to Banker 1, with disastrous results. Somebody who has a deep theoretical understanding of the underlying territory might be able to avoid that mistake.
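Simulating the two bankers shows how the track record misleads. The 10% annual crash probability and the wipeout-vs-flat-year crash outcomes are illustrative assumptions; the 10% and 8% up-year returns come from the paragraph above:

```python
# The two bankers, simulated over many 30-year careers. Crash frequency
# (10%/year) and the crash outcomes (wipeout vs. flat year) are assumptions;
# the 10% and 8% up-year returns come from the paragraph above.
import numpy as np

rng = np.random.default_rng(4)

def career(years=30, p_crash=0.10):
    w1 = w2 = 1.0
    for _ in range(years):
        if rng.random() < p_crash:  # market goes down
            w1 = 0.0                # Banker 1 is wiped out; Banker 2 is flat
        else:                       # market goes up
            w1 *= 1.10
            w2 *= 1.08
    return w1, w2

runs = np.array([career() for _ in range(10_000)])
print("Banker 1, median final wealth:", np.median(runs[:, 0]))  # ~0: usually ruined
print("Banker 2, median final wealth:", round(np.median(runs[:, 1]), 2))
# After five lucky years, though, Banker 1 looks strictly better:
# 1.10**5 = 1.61 vs. 1.08**5 = 1.47 -- which is all the naive empiricist sees.
```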

This problem also comes up in medicine. Imagine two different drugs. Both cure the same disease and do it equally well. Drug 1 has a side effect of mild headache in 50% of patients. Drug 2 has a side effect of death in 0.01% of patients. I think a lot of doctors test both drugs, find that Drug 2 always results in less hassle and happier patients, and stick with it. But this is plausibly the wrong move and a good understanding of the theory would make them much more cautious.
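The back-of-envelope expected-harm comparison, with a loudly made-up conversion rate between headaches and deaths, looks like this:

```python
# Back-of-envelope expected harm for the two drugs. The conversion rate
# between harms is a made-up assumption and is doing all of the work here.
p_headache, p_death = 0.50, 0.0001
headache_harm = 1.0
death_harm = 100_000.0  # assume one death is as bad as 100,000 mild headaches

print("Drug 1 expected harm:", p_headache * headache_harm)  # 0.5
print("Drug 2 expected harm:", p_death * death_harm)        # 10.0 -- 20x worse
# Yet a doctor observing a few hundred patients on each drug will almost
# certainly see zero deaths, and conclude Drug 2 is the pleasant one.
```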

(yes, both of these examples are also examples of the ludic fallacy. I fail, sorry.)

Overall this seems like a form of Goodhart’s Law, where any attempt to measure something empirically risks having people optimize for your measurement in a way that makes all the unmeasurable things worse. Black swan risks are one example of an unmeasurable thing; you can’t really measure how common or how bad they are until they happen. If you focus entirely on empirical measurement, you’ll incentivize people to take any trade that improves ordinary results at the cost of increasing black swan risk later. If you want to prevent that, you need a model that includes the possibility of black swan risk – which is going to involve some theory.

Nassim Taleb has been thinking about this kind of thing his whole life and I’m sure he hasn’t missed this point. Probably we are just using terms differently. But I do think the way he uses terms minimizes concern about this type of error, and I do worry the damage can sometimes be pretty large.

IV.

I previously mentioned that The Black Swan seems to stand in the tradition of other rationality books like Thinking Fast And Slow and The Signal And The Noise. Is this a fair analysis? If so, what do we make of this tradition?

While Taleb has nothing but praise for eg Kahneman, his book also takes a very different tone. For one thing, it’s part-autobiography / diary / vague-thought-log of Taleb, who is a very interesting person. I read some reviews saying he “needed an editor”, and I understand the sentiment, but – does he? Yes, his book is weird and disconnected. It’s also really fun to read, and sold three million copies. If people who “need an editor” often sell more copies than people who don’t, and are more enjoyable, are we sure we’re not just arbitrarily demanding people conform to a certain standard of book-writing that isn’t really better than alternative standards? Are we sure it’s really true that you can’t just stick several chapters about the biography of a fake Russian author into the middle of your book for no reason, without admitting that it’s fake? Are you sure you can’t insert a thinly-disguised version of yourself into the story about the Russian author, have yourself be such a suave and attractive individual that she falls for you and you start a torrid love affair, and then make fun of her cuckolded husband, who is suspiciously similar to the academics you despise? Are you sure this is an inappropriate thing to do in the middle of a book on probability? Maybe Nate Silver would have done it too if he had thought of it first.

Also sort of surprising: Taleb hates nerds. He explains:

To set the terminology straight, what I call “a nerd” here doesn’t have to look sloppy, unaesthetic, and sallow, and wear glasses and a portable computer on his belt as if it were an ostensible weapon. A nerd is simply someone who thinks exceedingly inside the box.

Have you ever wondered why so many of these straight-A students end up going nowhere in life while someone who lagged behind is now getting the shekels, buying the diamonds, and getting his phone calls returned? Or even getting the Nobel Prize in a real discipline (say, medicine)? Some of this may have something to do with luck in outcomes, but there is this sterile and obscurantist quality that is often associated with classroom knowledge that may get in the way of understanding what’s going on in real life. In an IQ test, as well as in any academic setting (including sports), Dr. John would vastly outperform Fat Tony. But Fat Tony would outperform Dr. John in any other possible ecological, real-life situation. In fact, Tony, in spite of his lack of culture, has an enormous curiosity about the texture of reality, and his own erudition—to me, he is more scientific in the literal, though not in the social, sense than Dr. John.

Going after nerds in your book contrasting Gaussian to power law distributions, with references to the works of Poincaré and Popper, is a bold choice. It also separates Taleb from the rest of the rationality tradition. I interpret eg The Signal And The Noise as pro-nerd. Its overall thesis is “Ordinary people are going around being woefully biased about all sorts of things. Good thing that bright people like Nate Silver can use the latest advances in statistics to figure out where they are going wrong, do the hard work of processing the statistical signal correctly, and create a brighter future for all of us.” Taleb turns that on its head. For him, ordinary people – taxi drivers, barbers, vibrant salt-of-the-earth heavily-accented New Yorkers – are the heroes, who know what’s up and are too sensible to go around saying that everything must be a bell curve, or that they have a clever theory which proves the market can never crash. It’s only the egghead intellectuals who could make such an error.

I am not sure this is true – my last New York taxi driver spent the ride explaining to me that he was the Messiah, which seems like an error on some important axis of reasoning that most intellectuals get right. But I understand that some of Taleb’s later works – Antifragile and Skin In The Game – may address more of what he means by this. It looks like Kahneman, Silver, et al are basically trying to figure out what doing things optimally would look like – which is a very nerdy project. Taleb is trying to figure out how to run systems without an assumption that you will necessarily be right very often.

I am reminded of the example of doctors being asked probability questions, about whether a certain finding on a mammogram implies X probability of breast cancer. The doctors all get this horribly wrong, because none of them ever learned anything about probability. But after getting every question on the test wrong, they will go and perform actions which are basically optimized for correctly diagnosing and treating breast cancer, even though their probability-related answers imply they should do totally different things.
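For reference, here’s the calculation behind that mammogram question, using roughly the numbers this literature usually uses; the prevalence, sensitivity, and false-positive rate below are the standard illustrative figures, not data from any particular study:

```python
# The arithmetic behind the mammogram question, with the illustrative
# numbers this literature usually uses (not data from a specific study).
p_cancer = 0.01              # prevalence among women screened
p_pos_given_cancer = 0.80    # sensitivity of the mammogram
p_pos_given_healthy = 0.096  # false positive rate

# Bayes' theorem: P(cancer | positive test)
p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy
posterior = p_cancer * p_pos_given_cancer / p_pos
print(f"P(cancer | positive mammogram) = {posterior:.1%}")  # about 7.8%
# Doctors asked this question typically guess 70-90%; the low base rate,
# not the test accuracy, dominates the answer.
```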

I see Kahneman, Tetlock, Silver, and Yudkowsky as all being in the tradition of finding optimal laws of probability that point out why the doctors are wrong, and figuring out how to train doctors to answer probability questions right. I see Taleb as being on the side of the doctors – trying to figure out a system where the right decisions get made whether anyone has a deep mathematical understanding of the situation or not. Taleb appreciates the others’ work – you have to know something about probability before you can discuss why some systems tend towards getting it right vs. getting it wrong – but overall he agrees that “rationality is about winning” – the doctor who eventually gives the right treatment is better than a statistician who answers all relevant math questions correctly but has no idea what to do.

Relatedly, I think Taleb’s critique of nerds works because he’s trying to resurrect a Greco-Roman concept of the intellectual – arete and mens sana in corpore sano and all that – and clearly uses “nerd” to mean everything about modern faux-intellectuals that falls short of his vision. Thales cornering the market on olive presses is his kind of guy, and he doesn’t think that all of the people who have rage-induced seizures when he whispers the phrase “power law distribution” in their ears really cut it. His book is both a discussion of his own area of study (risk), and a celebration of and guide to what he thinks intellectualism should be. I might have missed the section of Marcus Aurelius where he talks about how angry Twitter rants are a good use of your time, but aside from that I think the autobiographical parts of the book make a convincing aesthetic argument that Taleb is living the dream and we should try to live it too.

Perhaps relating to this, of Taleb, Silver, Tetlock, Yudkowsky, and Kahneman, Taleb seems to have stuck around longest. All of them continue to do great object-level work in their respective fields, but it seems like the “moment” for books about rationality came and passed around 2010. Maybe it’s because the relevant science has slowed down – who is doing Kahneman-level work anymore? Maybe it’s because people spent about eight years seeing if knowing about cognitive biases made them more successful at anything, noticed it didn’t, and stopped caring. But reading The Black Swan really does feel like looking back to another era when the public briefly became enraptured by human rationality, and then, after learning a few cool principles, said “whatever” and moved on.

Except for Taleb. I’m excited to see he’s still working in this field and writing more books expanding on these principles. I look forward to reading the other books in this series.


273 Responses to Book Review: The Black Swan

  1. sohois says:

    I would actually argue that Antifragile is Taleb’s best work – though perhaps, as you say, this is because the Black Swan became so widely disseminated that a very interesting central idea had become banal by the time I got around to reading it. I would argue that another issue the Black Swan had was repetition of a lot of material from Taleb’s first book, Fooled by Randomness. They both cover a lot of the same material and in some ways Black Swan is just Fooled by Randomness with the addition of the Black Swan concept.

    Antifragile similarly feels quite like Taleb’s earlier books, but it’s much more of a synthesis of the strongest qualities of the earlier works.

    That being said, I’ve yet to read Skin in the Game or the Bed of Procrustes, so my opinion may change.

    • jgr314 says:

      Dynamic Hedging is his best book, but it has a (much more) technical focus and will only appeal to a few. If you want to see inside the mind of a derivatives trader, it is very good.

    • fr8train_ssc says:

      Agreed. I hope Scott actually reads the whole Incerto series, but it’s worthwhile jumping around. I started with the Black Swan myself, and would argue that it’s the best introduction to the series, while Antifragile is probably best read second.

      Scott delves partially into the themes of Antifragile in section 3 of his review. The whole “pay $1 every non-rainy day, but collect $1000 on a rainy one” example goes into concave and convex risk/payoff functions. Convex risk would be the person paying $1 most of the time: they have a fixed loss rate, but will benefit from the unlikely rainy day. Concave risk, on the other hand, has fixed gains at the cost of catastrophic loss.

      This model isn’t that far-fetched, since it’s basically the equivalent of how insurance works. Taleb’s key point is that in many cases, someone’s convex function is someone else’s concave function. If we have to organize society in this way, is it better to have one giant stakeholder possessing a concave risk profile (i.e. a bank too big to fail), or to have multiple smaller stakeholders, each with a smaller payoff/risk profile, organized such that their failures aren’t systematic or correlated?

    • jasmith79 says:

      Bed of Procrustes is a book of interesting and frequently profound sound bites (aphorisms), so while worth the read it isn’t a “complete work” per se. Antifragile is his magnum opus: it’s an amalgamation of all of his good ideas. That sort of work does not generally lend itself to interesting book reviews.

    • Maxwell says:

      Writing the same book twice is yet another of Taleb’s idiosyncrasies. It worked; the second time was a success. Perhaps more authors should try it.

  2. Bla Bla says:

    The replication crisis put into question many of the statements you find in Kahneman, the Sequences, etc., marring the luster of the entire rationalist genre.

    • Luke Perrin says:

      Which statements? I know that almost all of the “priming” results failed to replicate, but they’re not exactly central to Kahneman and Yudkowsky’s theses. Which of the heuristics-and-biases results failed to replicate?

      • 2181425 says:

        I can’t speak for Bla Bla, but from p. 57 of the paperback version of Kahneman’s TF&S regarding priming:

        The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you.

        I found this jarring the first time I read it, but that was post-replication crisis and small-n psych studies were already suspect. Insisting that your ideas are so correct as to be beyond question is a high-risk strategy.

        • Randy M says:

          Wow, was that really in regards to recent effects established by one-off sociological studies that haven’t replicated since?

          • joshuabecker says:

            Perhaps it was in regards to the fact that priming as a general class of effects replicates just fine—priming effects are real—even if many specific priming studies fail to replicate.

            Show me the word “dog” and I’ll be much faster to identify a cat.

          • Gazeboist says:

            It was not.

            Also: it is not the case that “priming as a general class of effects replicates just fine”, and the article you linked doesn’t say that (even though it’s trying to). What’s going on is that there were two classes of priming effects (although the article doesn’t quite refer to them as such, since it calls everything “behavioral” priming): semantic priming, which is the recognition stuff you’re talking about, and what I would call “behavioral” priming, where those semantic effects go on to have significant behavioral effects. This is the more ludicrous stuff that has failed to replicate – students primed with “Florida” walking slower, power posing, nodding along makes you believe what you’re being told, etc.

            Implicit bias isn’t doing as badly, but it’s not in great shape.

        • nobody.really says:

          Disclaimer: I read Thinking Fast and Slow, but not Black Swan.

          The replication crisis notwithstanding, should Kahneman feel chagrin over this quote?

          Arguably the biggest conflict between Kahneman and Taleb’s work is Kahneman’s support for relying on making decisions based on fixed algorithms rather than making individualized, case-by-case judgments. As a guy who has won a Nobel Prize studying all the ways people make predictably erroneous judgments, it’s hardly surprising that Kahneman would encourage people to follow mechanisms that help them avoid such foreseeable errors.

          Arguably, one of Kahneman’s algorithms is to accept and internalize the lessons of cognitive bias research, even if it seems counter-intuitive. Taleb (and others here) point out that following this algorithm will lead to error. And they’re right.

          In contrast, according to Alexander, Taleb praises people who follow a more intuitive, less studied approach to decision-making. And doubtless this strategy will ALSO lead to error.

          So the question is, which path will lead to less error, and less drastic error?

          It may well be true that the “hot hand” research proved to be erroneous. And it may well be true that the latest revision to that research will itself be found to be erroneous. It’s turtles all the way down. Thus, I can’t see any more basis to condemn Kahneman for relying on the state-of-the-art research when he was writing than to condemn people who cite the LATEST state-of-the-art research to challenge him. Both groups seem to be relying on the state-of-the-art research. This seems rational to me—even acknowledging that the state-of-the-art will change over time.

          Now, maybe people aren’t condemning Kahneman for relying on the then state-of-the-art research, but for his immoderate endorsement of that research. However, I don’t understand Kahneman to be making an academic point, but a behavioral one. Kahneman’s point is that the nature of cognitive biases is that the conclusions will pretty much ALWAYS seem counter-intuitive; that’s the nature of a cognitive bias. So again, we must choose our algorithm: Do we accept and internalize the research, knowing both that the findings seem counter-intuitive and may be wrong? Or do we subject the findings to our own intuitive analysis—knowing that this analysis will almost certainly lead us to conclude that the findings, whatever their merits in the abstract, do not apply to us because we’re far too clever to fall for such biases?

          In short, I surmise that Kahneman embraces a reliance on research the way Churchill embraces democracy: as the worst conceivable option—except for all the others. Where some might regard Kahneman’s statement as an expression of hubris, I read it as a statement of humility about the human condition: No one should regard him- or herself as above the influence of cognitive biases. He is Odysseus, exhorting us to bind ourselves to the mast so that we are not lured by the Siren’s song to our predictable doom.

      • Gazeboist says:

        The Yudkowskian concern about “contamination” falls apart, but that’s mostly a basis for attacking other ideas rather than defending his own.

    • Wesley Mathieu says:

      Interestingly, I’ve noticed that Taleb himself has been much less keen on Kahneman lately, despite speaking so well of him in earlier books and conducting a glowing interview/seminar with him a few years back.

      https://www.youtube.com/watch?v=MMBclvY_EMA

      While Taleb hasn’t openly spoken against Kahneman, in Skin in the Game he vehemently calls out the concept of ‘risk aversion,’ one of Kahneman’s key insights, as ‘fictional nonsense.’

      https://twitter.com/nntaleb/status/901472828310114304?lang=en

      I have to see this as him condemning the previous praise for Kahneman without going after the man himself.

      • Eponymous says:

        Hmm. My first thought was, “Wesley must be mixing up ‘risk aversion’ with ‘loss aversion’.” But now I see that the error seems to be on Taleb’s part? I’m a little confused here.

        Risk aversion = curvature of utility function. Loss aversion (related to prospect theory, which I believe is basically Kahneman’s view), means an asymmetry of the utility function around gains and losses, deriving from psychological “anchoring” type effects.

        I think what’s going on here is that Kahneman and Taleb both criticize financial theory, but they diverge in precisely what they complain about. Kahneman advocates behavioral economics, whereas Taleb doesn’t like behavioral economics but instead thinks that investors are actually being perfectly rational, but that standard models neglect black swans and risk of ruin.

        I don’t understand why Taleb criticizes “risk aversion” however.

        Also, the quoted snippet makes a separate point (that your attitudes towards risk depend on what else is in your portfolio), which is fair enough.

        • Wesley Mathieu says:

          The argument Taleb makes is that ‘loss aversion’ is in fact rational because outside of a laboratory, a person isn’t just risking losing out on some quantified benefit, they are potentially going to experience a loss that they cannot recover from, up to and including death.

          Like, if your experiment is giving someone a choice between one option with a 90% chance of winning $1000 and a 10% chance of getting nothing, and one option with a 100% chance of winning $500, on balance it looks absurd to pick the second option.

          But in a real-life scenario, a person who is financially strapped can realize that NOT getting that $1000 is not just a stroke of bad luck, it could literally cause them to go without food, be unable to make rent, pay for gas, etc. which will have long-term consequences on their life.

          So in taking the first bet 9/10 times they’re a winner, 10% of the time they go completely bust and may be unable to recover. That changes the calculus and makes the second bet look much more preferable since they are at least able to continue ‘playing’ after that point.

          These extra conditions can’t easily be tested in lab conditions, so Taleb is claiming that humans developed the psychological aversion to risk as a rational survival strategy that reflects hundreds of generations of learned wisdom (i.e. those without the risk aversion trait were removed from the gene pool).

          I don’t recall if he thinks it is okay to ignore risk aversion if you have the funds to handily eat any short-term loss, but I am sure he is saying that risk aversion is not a negative feature that should be purged from humanity.

          • Eponymous says:

            The standard theoretical framework is expected utility maximization, which nicely handles the example you give. If a $1000 loss puts someone on the brink of starvation, that means their utility function is very non-linear in the area covered by the bets. Risk averse behavior arises from the concavity of the utility function. This is exactly risk aversion, and is completely standard.

            You don’t even need to appeal to loss aversion, which is something else. That is used to explain experimental results that seem to be really irrational (violating the axioms of expected utility theory).

            I’m actually mildly skeptical of whether these studies generalize outside the laboratory; but that’s another matter.

          • Swimmy says:

            @Wesley Mathieu

            I don’t know of anyone who thinks risk aversion is irrational. It’s a perfectly normal feature of expected utility theory. It’s not the same thing as loss aversion.

            Despite the mass of misinformation on the internet (some of it spread by Kahneman et al. themselves), the bias of loss aversion is not that “people dislike losses more than they like gains.” It’s about reference dependence.

            Run an experiment with two groups of people. Give one group $10. Then, using whatever excuse, give them another $10.

            Give the other $30. Then, using whatever excuse, take away $10.

            Loss aversion is the observation that people in the second condition will predictably have lower utility than people in the first condition, even though they walked into the experiment not knowing how much they would win, and walked out with the same amount of money. They both won, but you added the feeling of a loss to one of them.

            It’s not “losses loom larger than wins,” which is bog-standard utility theory. It’s more like “framing a win as involving a loss makes it feel like a loss,” which is kind of weird. It says that utility is, at least in part, narrative-based. It doesn’t only matter how much you have and what you can do with it, it matters what kind of story you tell yourself about why you have it.

            Shame on behavioral economists for not making this more clear to the general public. But rest assured most economists aren’t so silly to think risk aversion is irrational.

          • sandoratthezoo says:

            Your second game is higher EV than your first game, even for someone who has absolutely no follow-on consequences for the bad case.

          • Janet says:

            If I’m understanding Taleb’s argument correctly, this is another example of the ludic fallacy. That is, within an artificial game, we can confidently and reliably run the math to determine that a 90% chance of winning $1000 is “better” than a 100% chance of winning $500. But Taleb’s contention is that the “real world” is never an artificial game, and there are always additional, tacit possibilities with undefined (undefinable) probabilities and outcomes. For example, the risk that the person offering the wager will renege on the deal (e.g. doesn’t actually have $1000 to give you if you win), or has rigged the game so that the “90% chance” isn’t actually true, or has an ulterior motive to manipulate you and cause you a worse loss in some other way, even if you do win $1000 right now, or that this will put you into an ongoing, socially awkward situation entirely separate from the financial question.

            So in the real world, it’s very wise to look at the “bird in the hand” as more than an expected value function– because you can’t actually produce a firm mathematical likelihood that the guy offering you the bet is a sweet-talking, lying sack of ****.

        • pjs says:

          > Risk aversion = curvature of utility function.

          That’s controversial. First, it’s perhaps more reasonable to suggest that a curved utility function explains risk aversion rather than defines it. But more important, it just doesn’t work sensibly to explain why many people decline modest bets (e.g. why someone not on the brink of starvation would decline a 50/50 lose-$100 vs. gain-$110 bet). (E.g. http://faculty.som.yale.edu/florianederer/behavioral/Rabin_Thaler.pdf)

          So yes, there are cases where curvature in one’s utility function is all you need to explain risk aversion, particularly when the stakes are large with respect to one’s net worth, but that leaves out an awful lot of risk-avoiding behavior in the real world.

          • Eponymous says:

            I was speaking somewhat loosely, but I think correctly in this context (e.g. if Taleb says “‘risk aversion’ is nonsense” I take him to be referring to expected utility maximization with a concave utility function.)

            That said I agree that expected utility maximization may not provide a complete theory of behavior in the presence of risk (though arguably it does provide a complete *normative* theory). Clearly there are deviations that can be shown experimentally; and there are plenty of “anomalies” in the data which may derive from these deviations (or perhaps not). Hence the field of behavioral economics/finance.

          • pjs says:

            I think you are suggesting that risk aversion is modeled well enough by concave utility maximization (“UM”) (modulo a reasonable and expected quota of deviations and anomalies) that it’s usually reasonable to conflate the two – in normal discourse. Is that fair? That the model is by and large descriptive (or perhaps “explanatory” is a better word, or to go far weaker, “not inconsistent”) and that the anomalies and deviations are just that – exceptions to an otherwise broadly useful baseline model.

            I question whether that is true any more. In particular, as regards typical investment decisions (not the sole topic of this thread, but a lot of it), how much behavior is not anomalous? If UM is left to happily explain only “ruin or near ruin” situations (uncommon in everyday finance), it’s just not a good explanatory theory overall (normative maybe, but then we have no license to conflate risk aversion – as observed or in discourse about such – with UM).

            To make this more concrete… Suppose a wealthy but conservative person comes to you and says: “in any given year, I’m not willing to tolerate a 50% chance of losing 1% of my wealth if the upside of that risk is gaining 1.01%.” Once upon a time, this request seemed to be quite reasonable risk aversion as per concave utility maximization. After Rabin et al, we see that it’s (probably: haven’t run the math on this exact case) perhaps an extreme “anomaly” – simply inconsistent with any non-absurd concave utility function explanation. But if that’s not consistent with UM, where does this stop?

          • Eponymous says:

            @pjs:

            I think you are suggesting that risk aversion is modeled well enough by concave utility maximization (“UM”) …that it’s usually reasonable to conflate the two – in normal discourse.

            Not in “normal discourse”; in technical discourse, and particularly in the context in which Taleb uses it, i.e. “‘risk aversion’ in economists’ models is BS” (not exact quote).

            Suppose a wealthy but conservative person comes to you and says: “in any given year, I’m not willing to tolerate 50% chance of losing 1% of my wealth if the upside of that risk is gaining 1.01%.”

            That sounds like a completely reasonable level of risk aversion to me — relative risk aversion around 1. Log utility would give that, as you can check yourself: 1/2*(log(1.0101) + log(0.99)) < log(1)=0

            Actually, we can calculate the level of relative risk aversion that would make someone indifferent towards that gamble. Just take a second-order Taylor expansion of the utility function. Comes out to RRA ~= 0.99.
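            (A quick numerical check of both claims, as a sketch – assuming CRRA utility and the 0.99 / 1.0101 wealth factors above:)

            ```python
            # Check of the two claims above: CRRA utility u(w) = w**(1-g)/(1-g)
            # (log utility at g = 1); gamble = 50/50 between wealth factors
            # 0.99 and 1.0101.
            import math
            from scipy.optimize import brentq

            def gamble_minus_sure(g):
                """Expected utility of the gamble minus the utility of standing pat."""
                if abs(g - 1.0) < 1e-9:  # g = 1 is log utility
                    return 0.5 * (math.log(1.0101) + math.log(0.99))
                u = lambda w: w ** (1.0 - g) / (1.0 - g)
                return 0.5 * (u(1.0101) + u(0.99)) - u(1.0)

            print(gamble_minus_sure(1.0))               # barely negative: log utility declines
            print(brentq(gamble_minus_sure, 0.1, 1.0))  # indifference point: ~0.99
            ```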

  3. sconzey says:

    Still reading the review but I had to interject that the graph of the book reviews isn’t a probability distribution. You’d need to bucket the reviews, and then have “views” on the x axis and “frequency” on the y axis.

    You should fix this before Taleb notices and subjects you to an angry twitter rant. 😉

    • somervta says:

      It’s not intended to be a probability distribution, and in fact your proposed fix wouldn’t make it one. As it is, I think it’s fine to call it a generalized distribution which roughly maps to a power-law function; technically, for that to be right, the x axis should be the nth most popular post rather than the names of the posts, but that’s pretty obvious from context.

  4. BlindKungFuMaster says:

    I read “The Black Swan” when it came out. I thought it would have been a great essay. Way too much ego for my taste.

    And Taleb seems to think that the bankers and financial experts are deeply invested in their theories and cannot accept his gospel because of ego. In reality these bankers get filthy rich using their theories while not carrying the risk. In his narrative these are the in-the-box-thinking nerds, while he is the brilliant intellectual. In reality these “nerds” are doing alright by themselves, it’s the incentives that are messed up and Taleb isn’t as brilliant as he thinks.

    • dahillauthor says:

      In reality these bankers get filthy rich using their theories while not carrying the risk.

      I was going to say something similar when Scott said “Banker 1 follows a strategy that exposes herself terribly to black swan risk, and ensures she will go bankrupt as soon as the market goes down, but which makes her 10% per year while the market is going up.”

      • arlie says:

        Based on evidence from 2008, banker 1 gets bailed out by the government, and gets an even larger bonus the year that she should have gone bankrupt 😉

        Unless of course she’s Icelandic.

    • sohois says:

      Taleb actually directly addresses this point, many times in his books – survivorship bias. “Bankers” get filthy rich because the losers are weeded out and with a large enough sample size there will always be some people who can make a ton of money with no skill, and only luck.

      • BlindKungFuMaster says:

        That some people think they can consistently beat the market but actually can’t is a separate issue.

        I’m talking about picking up pennies in front of a steamroller, which is going to roll over somebody else. Like banker 1 in Scott’s review. These people also cannot consistently beat the market, but they can use strategies that make it pretty likely that they will make a lot of money in the short term, while their customers lose everything in the long term.

        Like doubling the stakes every time you lose in roulette.

        • Doctor Locketopus says:

          Yes. When the black swan event comes along, the investment advisor/broker/banker says “Oops. Looks like no bonus for me this year.”, while the customer loses all his money. To put it in Talebian terms, the advisor has no real “skin in the game”.

    • adreng says:

      I agree. Furthermore, it is not the case that academic financial theory really claimed that returns of most financial assets follow a normal distribution. Certainly, there are many models that use the normal distribution. Whether they are appropriate depends on when and how they are used.

      Many practitioners in the area of finance are more removed from theoretical research, and for some of them, the models based on the normal distribution may have been all they knew from financial theory; for such an audience, a book like The Black Swan is probably very beneficial. But it would certainly be completely wrong to claim that before that book and before the 2007/2008 crisis most financial theorists were convinced that the distribution of stock returns – even over shorter periods – is adequately explained by the normal distribution.

      In my view, there is something chameleon-like about Nassim Taleb. In his popular books, he pretends to be an enfant terrible of finance, completely outside the mainstream, and he uses very harsh words against academics and theoreticians. But once I attended a presentation by Taleb at a university where the audience mostly consisted of professionals and academics in finance. The presentation was not bad – Taleb clearly is intelligent and knowledgeable and has good ideas – but what he said also clearly was not outside the mainstream. He presented reasonable ideas about how stress tests should be done. The expectations of many people in the audience were frustrated: on one hand, Taleb hardly said anything bad or outrageous, but a comment I heard from several people was also that it was “not so special” – he just gave a presentation many other competent people in finance could have given, which did not fit his reputation so well. Taleb has also written papers that are intended for a more specialized audience in finance, and there, as well, I would say that they are not bad, but they are not really outside the mainstream of financial theory – at least certainly not the way his books for a general audience pretend to be.

      One may argue that this is because the academic world in the area of finance has changed since the 2007/2008 crisis, and to some degree, this is certainly true. I studied finance afterwards, and I don’t know how it was before. However, when I look at the theories and older articles and books that are cited, it clearly seems that there was a shift of emphasis, but not really such a strong break; for example, it had been clear all along that stock price returns do not really follow a normal distribution, and even many theoretical assumptions that are much more widespread than the idea that stock returns are normally distributed were challenged in articles all the time. So, with his more academic papers, Taleb is hardly more outside the academic mainstream than quite a large percentage of articles that appear in journals about finance. What his self-image of a rebel is mostly based on is not so much the content of his books, but the style of his popular books.

      Even recognizing that it had been known for a long time that stock returns do not really follow a normal distribution, the question still remains why so many people used – and still use – models based on the normal distribution so often, even in cases where it is quite clear that the normal distribution is not even a good approximation, and therefore underestimate tail risks.

      I think there are mainly three kinds of reasons (maybe more):
      1. Convenience – the models based on the normal distribution are simpler to implement, and since there are cases where they may be regarded as an acceptable approximation, there is the temptation to use these simple models also in cases where they are less appropriate. In the past, computational resources were also an issue – often, the models that are not based on the normal distribution would have taken too long to calculate (often, there is no analytical way to get the results, and a large number of simulations has to be run), but with contemporary computing power, that should be less of a problem.
      2. Deeper philosophical issues related to why people allegedly prefer models with the normal distribution and are attached to them. This is something mostly Taleb claims; I don’t think many other financial theorists write about it.
      3. Incentive structure – excessive risk-taking is often rewarded. When large banks take large risks, they can make huge profits in good times, and in bad times they are saved by the state because of the “too big to fail” problem. There are similar phenomena with employees who receive very large bonuses if their investments perform very well, but don’t have to pay negative bonuses when they fail (they might lose their jobs, but the bonuses of bank managers could be so large that taking the risk of losing the job was probably often worth it). When taking excessive risks is rewarded, people prefer models that let them take excessive risks (so the choice of these models is an effect rather than the cause of excessive risk-taking). These problems are often addressed in mainstream financial theory – Taleb also mentions them, but we certainly would not need Taleb to know about them; they have been researched for a long time – which, of course, does not mean that they have been addressed adequately, but that is a political problem.

      Convenience certainly plays a role, though it hardly explains everything. I would guess that point 2 is hardly very relevant, but that point 3 – the incentive structure – is very relevant. My impression of Taleb’s books is that he stresses point 2 far too much and mostly neglects point 3. Probably “Skin in the Game” focuses more on incentives – I haven’t read that book yet, but it seems that incentives are a crucial part of its content.

      • jgr314 says:

        I can comment on the thinking among finance practitioners prior to 2007:
        (1) LTCM (circa 1997/1998) was a defining event in which pure academic/theoretical finance models had to confront the real world. There were actually several sub-stories within this theme. The most widely known of these is, broadly, that real-world distributions aren’t the same as the theoretical ones used for nice analytical solutions (whether normal or not). However, other practicalities revealed themselves as well (the role of transaction costs, the importance of tax considerations, some details that are even more inside-baseball, related to government bond trading). Fortunately, most of us got to learn those lessons for (almost) free (we were exposed to the risk that LTCM’s failure would take down the global banks and end the modern financial system, etc.).

        (2) Traders/investors with “skin in the game” used simple approximation models as a reference tool that they could adjust based on their knowledge or intuition about how the real world differed from the model. For example, Black-Scholes option pricing assumes lognormal distributions and constant vol; everyone knew those weren’t true (see the sketch at the end of this comment). However, B-S is such a simple model that we all understand it, and it is amenable to reasonable kludges (which still leave it in a form that is understandable). I believe this is a reasonable approach, superior to most efforts to improve the approximation with a more complex (but less transparent) model.

        (3) The true black box models that were widely used were allowed because of incentive problems. The people who knew, didn’t care, and the people who should have cared didn’t know and didn’t have the skills to know.

        FWIW, the modeling problems of the Great Financial Crisis weren’t really linked to mis-modeling of equities.
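
        For readers who haven’t seen it, here is a minimal sketch of the Black-Scholes call formula mentioned in point (2) – the standard textbook formula, with made-up example numbers; the single constant sigma is exactly the assumption everyone knew was false:

          from math import exp, log, sqrt
          from statistics import NormalDist

          def bs_call(S, K, T, r, sigma):
              # S: spot, K: strike, T: years to expiry, r: risk-free rate,
              # sigma: (assumed constant!) volatility of log-returns.
              N = NormalDist().cdf
              d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
              d2 = d1 - sigma * sqrt(T)
              return S * N(d1) - K * exp(-r * T) * N(d2)

          print(bs_call(S=100, K=110, T=0.5, r=0.02, sigma=0.25))  # ~3.7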

        • Andrew Klaassen says:

          > FWIW, the modeling problems of the Great Financial Crisis weren’t really linked to mis-modeling of equities.

          And it turns out that the modeling of the mis-modeling was itself mis-modeled.

          The financial system freaked out when it realized how much depended on subprime borrowers paying back their mortgages, but it turned out that subprime borrowers were pretty good at paying their mortgages. Multiple academic studies have found that most of the defaults ended up coming from prime borrowers who had taken on second mortgages to buy investment properties. This was a risk that wasn’t taken into account either by the initial securitized-mortgage bundlers or by the people who raised the alarm that led to the credit freeze-up.

          • baconbits9 says:

            Your link is misleading. “Even at the height of the housing boom subprime was only 20% of total borrowing” – but prior to 2002 subprime was never more than 5% of the market. Phrasing it the first way de-emphasizes the shift in borrowing, which is what is being criticized.

            Secondly, after-the-fact evaluations of repayment aren’t that relevant. The boom had two fairly distinct characteristics: the larger-than-usual share of subprime borrowers and the larger-than-usual share of adjustable-rate borrowers. ARMs accounted for about 30% of the market from 2004 through 2007, after being less than 15% from 2000 through 2003. 30-year mortgage rates from 2004 to 2007 were between 5.5% and 6.5%; 30-year rates from 2009 (when 5-year ARMs written in 2004 would see either a rate shift or a refinance) to the present day have mostly been between 3.5% and 5%. This is a substantial decline in expected costs on these mortgages, with some people getting a multi-year 3-point rate reduction without actually having to pay refinancing fees. These securities ended up paying out less than expected due to the extremely low interest-rate environment that came with the crash.

      • eccdogg says:

        I agree with most of this.

        Very shortly after the Black-Scholes option model started to be used in finance, people noted that you could not use one volatility assumption for options at every strike. Instead of saying “well, the distribution has to be normal, so those out-of-the-money options are overpriced,” professionals computed an implied volatility surface and assumed that the market was correct. A volatility surface implies non-normality, and volatility smiles were known long before Fooled by Randomness or The Black Swan.
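
        To make “computed an implied volatility surface” concrete, here is a minimal sketch of the inversion: for each strike, solve for the sigma that makes the Black-Scholes price match the market quote (bisection works because the call price is increasing in sigma). The quotes below are invented, chosen only to show the characteristic smile:

          from math import exp, log, sqrt
          from statistics import NormalDist

          def bs_call(S, K, T, r, sigma):
              N = NormalDist().cdf
              d1 = (log(S / K) + (r + sigma**2 / 2) * T) / (sigma * sqrt(T))
              return S * N(d1) - K * exp(-r * T) * N(d1 - sigma * sqrt(T))

          def implied_vol(quote, S, K, T, r, lo=1e-4, hi=5.0):
              for _ in range(100):  # bisection on sigma
                  mid = (lo + hi) / 2
                  lo, hi = (mid, hi) if bs_call(S, K, T, r, mid) < quote else (lo, mid)
              return (lo + hi) / 2

          # Invented quotes; no single constant sigma reproduces all three.
          for K, quote in [(80, 21.5), (100, 5.0), (120, 0.9)]:
              print(K, round(implied_vol(quote, S=100, K=K, T=0.5, r=0.02), 3))
          # Roughly 0.25, 0.16, 0.20 – higher vols in the wings, i.e. a smile.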

        My finance training was in 2002-2003, and some of these issues were certainly discussed. However, I will say that my classes on mortgage-backed securities left out obvious Talebian critiques that I personally brought up in class.

        • cryptoshill says:

          Almost any derivatives trader who is trying to analyze a market will immediately assume the market is correct – which is sort of a Talebian problem of leaning too much on empiricism and not learning enough theory.

          • eccdogg says:

            Maybe, but the point is that the market implied non normality and that is what folks adopted. Normality was not the assumption.

            But I don’t think you are correct: almost all traders take a view on the assets they trade and shade their portfolios accordingly, except for folks who run a completely flat book (and it is almost impossible to run a book that is completely flat across all the greeks and the basis).

            And to the extent that market makers do assume the market price, there is a very good reason for that: they are making their money on the spread. Market is the price at which they can hedge or balance their book. They try to buy at market - x and sell at market + x. In that situation their personal beliefs are kind of irrelevant.

      • TDB says:

        wrong to claim that before that book and before the 2007/2008 crisis most financial theorists were convinced that the distribution of stock returns

        Is that what Taleb claims, or does he claim that they assume the error terms in their models are normally distributed, and ignore any “anomalous” datapoints that might make them doubt their guess?

    • To be fair, he did write an entire book addressing this exact problem. Taleb dislikes bankers who privatize profits and socialize losses, even if they make out like bandits, because they don’t have skin in the game:

      If you do not take risks for your opinion, you are nothing. I have no other definition of success than leading an honorable life. Honor implies that there are some actions you would categorically never do, regardless of the material rewards. Honor means that there are things you would do unconditionally, regardless of the consequences.

      The highest ideal is to have ‘soul in the game’, which means you not only own the risk associated with your own decisions, but put yourself at risk on behalf of others, i.e. by making personal sacrifices for the sake of the collective.

      Jon Haidt has a similar line that the shift away from character towards ‘quandary’ ethics (abort this fetus, or not? Kill one person to save five?) was a mistake. We hardly ever run into trolley-problem-style scenarios in real life, so we don’t get to exercise our morality muscles. It also relies on bad psychology: “Trying to make children behave ethically by teaching them to reason well is like trying to make a dog happy by wagging its tail.”

      I find the ‘ethics as a personal aesthetic’ argument pretty compelling, in the sense that it broadens the scope of morality and leverages the better aspects of human nature. For example, I find myself a lot more inclined to do good things out of a sense of personal pride, rather than by following arbitrary rules, or running numbers to try and maximize utility. This seems to me like it would be a universal in human psychology, but I’m probably typical-minding. What’s the SSC community’s position on virtue ethics?

      • Irenist says:

        I think the “SSC view,” if there is one, is best represented by Scott’s review of MacIntyre’s “After Virtue,” where Scott gets exasperated by MacIntyre’s usual woolly vagueness.

        I don’t know that Scott or the SSC community have had occasion to engage with the works of more rigorous virtue ethicists like Rosalind Hursthouse, Philippa Foot, or G.E.M. Anscombe.

        So unfortunately, MacIntyre’s “underpants gnomes” presentation of virtue ethics as just a matter of, “like, being virtuous, y’know?” is probably taken around here for being representative of aretaic metaethics generally.

        • The Nybbler says:

          That pinged my Sidles detector, but appears to be a false alarm.

        • I don’t really get how deontology and virtue ethics could be anything other than consequentialism in drag (how do you set the rules? how do you decide the virtues?) but they still seem like useful and practical heuristics for actually, like, doing something.

          I recently read Starship Troopers, and one of the non-odious bits that stuck with me was Mr Dubois’ lecture on the importance of instilling character and a sense of duty from the ground-up. I’m not sure to what extent this is just old dudes waxing lyrical about the good old days/lamenting the moral decay of youth, or whether there’s a real phenomenon whereby modernity is making people less virtuous.

          If you have any reading recommendations that are layman-friendly, I’d definitely be interested in learning more.

          • Hoopyfreud says:

            Strangely, I don’t get the confusion.

            A determination of virtues (or rules) has to be done definitionally; what does justice mean? What about happiness? It’s a framework, and one that probably fundamentally *can’t* be communicated – a qualia of the mind. You can work to *increase* justice or happiness, but there’s a lot of underlying philosophy you’ve done to decide exactly what it is you care about, or what class of actions you’re trying to [inhibit/enact], and that philosophy is, to me, much more central than the drive to increase your chosen metrics quantitatively.

          • Irenist says:

            For some values of “layman,” Foot’s “Natural Goodness” is, er, good.

            Shorter and entertaining, but not systematic, is Anscombe’s curmudgeonly essay “Modern Moral Philosophy.”

            One important note is that not all virtue ethicists come to “old-fashioned” conclusions. While Anscombe was a devout and orthodox Catholic, Hursthouse argues from neo-Aristotelian premises that abortion can be the correct course of action in some cases.

          • Faza (TCM) says:

            To the best of my knowledge, for the Kantian deontologist, a rule needs to be consistent (non-contradictory) in a universal sense, in order to apply. Naturally, there are some axioms in play, that I’m not really qualified to list or discuss, because I’m not a deontologist, Kantian or otherwise.

            Funnily enough, it’s not impossible to argue that consequentialism is deontology in drag, by pointing out that it depends on value judgements that are not, themselves, derivable from consequentialism (you can only compare the goodness/badness of particular outcomes after you’ve defined what it means for an outcome to be good or bad). Even the assumption that “we should always pick the superior outcome” sounds suspiciously like a “moral duty”.

      • Ghatanathoah says:

        Basing morality on pride rather than on love or compassion has its own set of problems. In particular, it can lead to what this article calls “Thornton Melon Morality,” where rather than trying to better oneself, one instead seeks out examples of exceptionally immoral people to feel superior to. In a large society with advanced communications, finding such examples is quite easy.

        In a large and diverse society it is also possible to have different and competing standards about what one should be proud of vs. what one should view as basic decency (i.e. stuff that you should be ashamed of failing to do, but not proud of doing). Chris Rock complains about this in his comedy bit “Black People vs. N****s,” in which he complains about people who take personal pride in doing things that Rock sees as basic human decency, such as avoiding prison and feeding their children.

        This type of morality is probably also the root of the bizarre assertion Creationists make that the theory of evolution encourages immorality. Their reasoning is “people behave morally because they are proud of the fact that they are better than animals, therefore if you assert that people are animals, they will lose all motivation to be good, since they are no longer able to feel superior to animals.”

        I’m sure a clever and committed virtue ethicist can find their way around these problems. A rather obvious solution seems to be to take pride in being better than one’s past self, rather than animals or other people.

        I personally have an aversion to motivating morality by pride because it feels more moral to make morality about other people, rather than about myself. I should help people because I care about them, not because I care about myself. Of course, you could argue that not being moral out of pride makes me even more virtuous, so I should be even more proud (or meta-proud?) in the long run.

        We hardly ever run into trolley-problem-style scenarios in real life, so we don’t get to exercise our morality muscles.

        This kind of misses the purpose of quandaries. We don’t run into controlled chemistry experiments in real life either, but the knowledge of chemistry we glean from them is useful day-to-day. Quandaries are supposed to help you understand what your priorities are in the abstract, so you can apply them better in the concrete.

          I guess you could replace ‘pride’ with ‘duty’ or ‘love’ or whatever – it seems like having some intrinsic attribute you want to cultivate is likely to be more motivating than an external rules-based or calculation-based model, even if you use those tools to figure out what to do in the first place.

          To give another example (again, without suggesting it necessarily has any broader applicability) I knew for a long time that eating factory-farmed meat was Bad. It took me years to change my behavior, and I’m pretty sure the only reason it stuck was because it became some (very small) part of my identity. I’ve noticed smart and rational people noticing this in themselves quite frequently – they know perfectly well they ‘ought’ to do something, and they can quote the theory chapter and verse, but they don’t necessarily follow through.

          Quandaries are supposed to help you understand what your priorities are in the abstract, so you can apply them better in the concrete.

          I think the point that Haidt and co are making is that there’s a whole lot of abstract, but it doesn’t always lead to much concrete. I learned the basic principles of moral philosophy in college, with all of the cute thought experiments, then didn’t give them a second thought for years. Obviously it’s not an either-or, but perhaps the pendulum needs to swing back towards practical philosophy a little? (EA seems like a promising example of combining the best of both worlds.)

    • Victor says:

      I had similar ideas when I read “The Black Swan” when it came out. I was studying physics and was really interested in the book based on what I had heard about it.

      After reading it, I thought: “so what?”. I was not shocked at all, and I thought the book just stated the obvious: that there are things we cannot predict. I mean, even in a normal distribution a Black Swan can happen, and it doesn’t mean that the event doesn’t follow that distribution. I remember guessing that, maybe, in the financial world Taleb came from, they didn’t really understand statistics, and so the book was really important for that audience.

      I wrote about the book at the time on a blog I had, and for me the book was basically saying: “there are unlikely events that we cannot predict, and they will cause crises and disruption”. I don’t think you need 400 pages to state that idea. It is true the book is fun to read, more or less. Also, its success is probably related to the fact that a lot of people don’t really know about statistics (even I am not an expert, though I have some solid foundations), and the book is telling them “forget those experts that tell you this and that, they don’t know sh*t either” – and people love being told that they, actually, are more “intelligent” than those “nerds” who go around with their knowledge and ego.

      A few years later, by then a physicist working in software development, I read it again to check my ideas, as I thought I might have been too naive the first time. I would say I maintain my take: the book is interesting for someone who doesn’t know statistics and such, but it does not develop any new idea or concept, apart from defining the concept of a black swan more clearly (which I think is the big contribution of the book).

      But in general, it is about saying that experts know nothing because they act as if everything were predictable, when there are events we cannot predict. Which is inane, because if we cannot predict… what do we do? Nothing.

  5. Markus Ramikin says:

    > Drug 1 has a side effect of mild headache in 50% of patients. Drug 2 has a side effect of death in 0.01% of patients.

    Hm, which side of the dust specks debate were you on again?

  6. dahillauthor says:

    I really must stop threatening to beat critics with a bottle (especially teenagers). Now I know why I get so few reviews…

    I wonder how that author saw it playing out: Option 1, beat the critic with a bottle, tell the world, discourage other critics (good), go to jail (bad). Option 2, beat the critic, keep it secret, get more critical reviews (I suspect the criticism was richly deserved and cut a little too close to the bone), rinse, repeat. Or perhaps it was Option 3, which is Option 1 with the rider that there’s no such thing as bad publicity (in which case, accept the one star review as part of your marketing strategy.)

    • nobody.really says:

      People misunderstand that incident.

      Yes, the author was furious–but not with the reviewer; with the wine selection. The author wasn’t smashing an infuriating reviewer’s head with a bottle of wine. Rather, he was smashing an infuriating bottle of wine on the nearest object. Just bad luck that the closest object was the book reviewer.

      Kinda a black swan incident, no?

  7. Thomas says:

    So, anyone you know follow this investing advice? I don’t own any Treasuries, and I don’t know anyone who does. I also don’t know how to identify the wild hare idea that might shoot the moon.

    I play the middle – mutual funds – and pick up the dollars before the steamroller. Held tough through the dot-com bubble and 2008. Maybe I should have put some money in Theranos?

    • melboiko says:

      I own a bunch of another country’s equivalent of Treasuries (90% of my investments), and I bet some money on a managed fund of silicon-tech companies (10%). I didn’t know about the book, but I was following the same rationale; I’m essentially a laywoman when it comes to investing, and it just felt sensible at the time.

    • The barbell strategy (85% hyper-conservative, 15% hyper-aggressive) probably works well for Taleb, but when I looked into it I couldn’t help but think it was basically disastrous for small investors:

      For someone ‘big’ enough, the bets on the 10 to 15 per cent side make up for the stagnant 85 to 90 per cent in T-bonds or similar. Let’s say Taleb’s net worth is $5 million. That means he’s got ~$750,000 to spread around a diversified basket of highly speculative bets. It’s highly likely that at least one will pay off.

      For a small investor, that high-risk basket might only hold $10,000 or $20,000, which is nowhere near enough to play this game. Minimum buy-ins mean you’d only be able to place one or two bets; perhaps a handful at most. The chances of holding the winning lottery ticket are tiny, which means you’ll end up with nothing, while the rest of your money moulders away in the bank.
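
      The back-of-envelope version of that argument: if each speculative bet pays off with independent probability p, the chance that at least one hits is 1 - (1 - p)^N, which collapses once minimum buy-ins cap N. A sketch with made-up numbers (p = 5%, $10k buy-ins):

        P_HIT = 0.05  # made-up probability that any single long shot pays off

        def chance_of_a_winner(n_bets):
            return 1 - (1 - P_HIT) ** n_bets

        # Taleb-sized sleeve: $750k across $10k buy-ins -> 75 bets.
        # Small investor: $15k across the same buy-ins -> 1 or 2 bets.
        for n in (1, 2, 75):
            print(f"{n:3d} bets -> {chance_of_a_winner(n):.0%} chance of a winner")
        # 1 bet -> 5%, 2 bets -> 10%, 75 bets -> 98%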

      I use a bastardized version of the barbell strategy for my own portfolio, with a split between passive index funds and high-risk speculative investments. It’s based on the same general principles of optionality (minimize downside risk, maximize exposure to positive black swans), but Taleb would no doubt disapprove, and bog-standard broadly diversified index funds still seem like the best choice for just about anyone (including me, if I’m honest!).

      EDIT: Something else that strikes me as weird about Taleb’s advice is that a passive index fund is about as ‘safe’ as it gets in the long run – there hasn’t been a 20-year period in which stocks have fallen. Of course, this means nothing to Taleb, because black swans by definition haven’t happened before, but technically the exact same argument could be applied to fiat currencies or Treasury bills, or whatever else you consider a ‘safe haven’.

      (Also, there’s a massive downside risk in keeping most of your money in cash – it’s just less obvious.)

      • J Mann says:

        Falkenstein argues that most research shows that out-of-the-money options and other high-risk investments are overpriced, if anything.

        • I’d be interested to learn more if you happen to have a link handy? I’ve read the Falkenstein reviews of Taleb’s work before but don’t remember the high-risk overpricing bit.

          • J Mann says:

            I posted a few links downthread.

            It’s too far over my head for me to really judge who’s right, but Falkenstein seems like a pretty good steelman response to Taleb’s finance-based criticisms.

          • baconbits9 says:

            I can’t read these pieces as a steelman.

            Taleb argues that the unpredictability of important events implies we should basically forget about all that is predictable, because that’s not where the real money or importance is.

            A steelman of Taleb on this point (using language from The Black Swan) would be along the lines of: “Predictable events live in Mediocristan, and unpredictable events live in Extremistan, and changes or errors in Extremistan will dominate changes or errors in Mediocristan.”

          • baconbits9 says:

            Here is another, much more serious, example of straw-manning from Falkenstein:

            From Taleb’s Wikipedia entry circa July 2006, we see where Black Swan thinking goes when applied to an investment strategy:
            When he was primarily a trader, he developed an investment method which sought to profit from unusual and unpredictable random events, which he called “black swans.” His reasoning was that traders lose much more money from a market crash than they gain from even years of steady gains, and so he did not worry if his portfolio lost money steadily, as long as that portfolio positioned him to profit greatly from an extremely large deviation (either a crash or an unexpected jump upwards).

            This is incorrect. Wikipedia might have said it, but Taleb contradicted it in the footnotes of The Black Swan, where he wrote:

            I specialized in complicated financial instruments called “derivatives,” those that required advanced mathematics – but for which the errors for using the wrong mathematics were the greatest. The subject was new and attractive enough for me to get a doctorate in it. Note that I was not able to build a career just by betting on Black Swans – there were not enough tradeable opportunities. I could, on the other hand, avoid being exposed to them by protecting my portfolio against large losses.

            Criticising someone’s work by taking a Wikipedia claim at face value, when the work itself contradicts the point, is at least sloppy; and as it is the only quote in a reasonably long piece that warranted indentation, it sounds suspiciously like cherry-picking.

        • bizwacky says:

          I’m curious about this as well. I’ve also read that investments with high skewness (lottery-ticket like returns) like penny stocks and options have lower expected returns than the equivalent investments without that skewness. The explanation that I’ve heard is that basically, people are willing to pay for lottery tickets, which are negative ER, so they’re also willing to pay a bit more for investments with returns shaped like lottery tickets.

          • I wouldn’t be surprised if that were true, for any given class of fat-tailed investments. If it’s true across the board, then someone needs to tell venture capitalists they’re doing it wrong! Presumably the difference there lies in expert discernment/access to deals/added value, which again, makes Taleb’s advice pretty much useless for the average garden-variety investor.

          • baconbits9 says:

            You can have lower nominal returns and higher expected returns.

            For example, I pay you $1,000 for a put option every year on January 1st, and you turn around and stick that money in the market. Years one through five the option expires worthless, and then in year 6 there is a market crash; I cash out my put for $5,000 and put that $5,000 in the market. The simplistic view is that you are currently ‘up’ $1,000 on me, as I paid you $6,000 and you paid me out $5,000. A slightly better version is that you are “up” $1,000 plus the gains from having that money in the market for 5 years. The correct answer, though, depends on what happened after I put the money in the market. I got a lump sum that was conditional on the market dropping a substantial amount, and there are certainly times where putting $5,000 in all at once outperforms putting in $1,000 in 6 separate chunks.

            Following our hypothetical, you might have invested $1,000 in Jan 2003 when the S&P was around 900, 2004 at ~1,100, 2005 at 1,200, 2006 at 1,250, 2007 at 1,400, and 2008 at 1,450, meaning your average purchase price is (ignoring dividends etc.) a little over 1,200. I instead get $5,000 to invest in late 2008/early 2009 when the market is around 950. Come 2018, my $5,000 purchase is worth (with the S&P at 2,900) a little over $15,000. The $6,000 averaged at 1,200 is worth a little under $15,000.

            NUMBERS FOR ILLUSTRATIVE PURPOSES ONLY!
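
            A quick check of those numbers, under the same simplifications (no dividends, prices as given above):

              dca_prices = [900, 1_100, 1_200, 1_250, 1_400, 1_450]  # 2003-2008 buys
              dca_units = sum(1_000 / p for p in dca_prices)
              put_units = 5_000 / 950                                # early-2009 lump sum

              SP_2018 = 2_900
              print("DCA side:", round(dca_units * SP_2018))  # ~14,700: "a little under $15,000"
              print("put side:", round(put_units * SP_2018))  # ~15,300: "a little over $15,000"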

          • Protagoras says:

            This seems interesting and relevant. It describes a strategy of selling puts and investing the proceeds plus additional funds to cover the potential losses in short term treasuries (liquid assets that can easily be sold if the puts have to pay out), and apparently over the past 11 years at least (the period over which the strategy was tracked) it produced both higher returns and lower volatility than just investing in the underlying stocks. Which suggests that at least over that period put options were indeed overpriced (though given the shortage of alternatives for some kinds of risk hedging, that wouldn’t mean they are never worth buying).

          • baconbits9 says:

            @ Protagoras

            Thanks for that find. It looks like the put index outperformed the S&P from 1986 through 2015, earning 9.9% versus 9.5% annualized. Eyeballing the S&P chart makes it look like the S&P should have outperformed since then, with several down months that look like they would be payout months, and a strong vertical climb, which is where the Put Index should underperform.

            This is an interesting index for sure.

          • Andrew Klaassen says:

            I’ve also read that investments with high skewness (lottery-ticket like returns) like penny stocks and options have lower expected returns than the equivalent investments without that skewness.

            I suspect, without any research to back me up, that this is something which changes over time as investment fads, financial conditions, ease of access to markets, and available financial instruments change.

            Over the very long term what you’re saying is quite possibly true, but over the very long term we’re all dead, etc.

    • Eponymous says:

      Treasuries are not risk free.

      Alcor would be a good example of a positive black swan bet. As a customer, not an investor.

      • HeelBearCub says:

        And if you really believe in the possibility of black swans, exposing the vast bulk of your portfolio to a single black swan seems … antithetical to thinking you can’t predict black swans.

        • TDB says:

          It’s a hedge. You don’t care what happened to the 85% if the 15% does well enough, and vice versa. You have your trained monkey pick up those nickels in front of the steamroller.

          This of course assumes that you’ve chosen wisely enough that you can’t lose both at once. It’s not a prediction, it’s an insurance policy. (Oops, that’s a bad metaphor, since insurance is based on being able to count on the actuarial tables predicting populations better than you can predict your individual outcome. But it is insurance in the sense of “this will bail me out if that goes badly”.)

    • Chalid says:

      Most strategies that resemble “picking up pennies in front of a steamroller” are systematically profitable even in the long term. Picking up the pennies is selling insurance, basically. For example, buying out-of-the-money options costs you money but has a small probability of a large return, and selling those options gets you money up-front but has a small probability of a large loss. On average, options sellers make more money than buyers.

      As with almost any systematically profitable strategy, you can explain this in “behavioral” or “rational” ways. The “behavioral” explanation is that people systematically overestimate the likelihood of unlikely events. This tendency is well-documented in e.g. prediction markets and academic finance calls the resultant pricing anomaly the “lottery effect” for obvious reasons. The “rational” explanation is that in crises, the value of an extra dollar is more than it is during “normal” times, so a strategy that makes money most of the time but loses money during crises is worse than a strategy that makes the same amount of money on average, but does well during crises and badly during normal times.
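
      A toy version of the insurance-selling trade, with made-up numbers in which the premium overpays for the risk, as the “lottery effect” suggests: the seller collects $1 per period against a 1% chance of an $80 loss, so the actuarially fair premium would be $0.80.

        import random

        P_CRASH, LOSS, PREMIUM = 0.01, 80.0, 1.0  # made up; fair premium = 0.80

        def sell_insurance(periods=1_000):
            pnl = 0.0
            for _ in range(periods):
                pnl += PREMIUM           # pick up the penny
                if random.random() < P_CRASH:
                    pnl -= LOSS          # the steamroller
            return pnl

        runs = [sell_insurance() for _ in range(2_000)]
        print("mean P&L:", round(sum(runs) / len(runs)))  # ~ +200
        print("worst run:", round(min(runs)))             # occasionally very ugly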

      This seems incompatible with Taleb and I’m not sure how he reconciles the literature on this with his investment suggestions.

    • Matt M says:

      If you have a high yield savings account (including money market) you almost certainly own, by proxy, something roughly equivalent to treasuries.

    • Robert Jones says:

      I’m 10% in Treasuries (well, gilts actually, because I’m in the UK). I am not qualified to give investment advice, but I believe the usual advice is to spread your investment across asset classes. People also usually increase their bond holdings as they approach retirement.

      • cryptoshill says:

        A few of the best arguments against asset class diversification:
        1. You want all of your money in the highest performing assets.
        2. Buying an asset for the purpose of “diversifying” encourages you to engage in asset markets you have no understanding of.
        3. Asset class diversification doesn’t necessarily change your risk profile significantly. For example – given the correlations between US housing prices and US stock prices – “diversifying” by buying rental property doesn’t do much to reduce your portfolio’s exposure to a downturn in the US economy as a whole. If you were trying to reduce your exposure to downside risk in the US stock market, you could be more efficient by taking a short position on a company you don’t like. Of course, doing those things *also* exposes you to critique 2.

        That said, housing markets are weird and there is some personal security that exists in owning a home (At a bare minimum if you come up short for a few months it takes a bank 90 days to kick you out).

        • baconbits9 says:

          (At a bare minimum if you come up short for a few months it takes a bank 90 days to kick you out).

          This is largely true for renters as well, though with different timelines and processes.

          Home purchases are not investments; they are consumption, so you aren’t diversified much by owning your own house (houses do come with some optionality, but that optionality mostly pays off when the rest of the market is going up, and it costs quite a lot to capture). Owning a rental can be an investment, and rents move more slowly than housing prices, giving you some diversification, but knowledge is needed.

          • nobody.really says:

            Home purchases are not investment, they are consumption…. Owning a rental can be an investment….

            Oh, how foolish of me: I bought a house to live in. And my neighbor did likewise.

            But how fortunate that we live in a subdivision full of identical houses. I’ll simply sell my house to my neighbor and buy his. Now I can rent my house to him, and he can rent his house to me, and voila! We’ll magically transform our act of consumption into an act of investment–and we won’t even have to move!

            Of course, when I was living in the house I owned, I was effectively renting the house to myself—with the benefit of receiving those (implied) revenues tax-free. Now that I’m renting the house I own to my neighbor, and vice versa, we each continue to enjoy the same housing benefits as before—but we get the pleasure of paying taxes on the revenues (net of expenses). And when we sell our houses, we won’t get the benefit of certain capital gains protections for homeowners. So, yeah, we may be poorer—but we can feel so much smarter for having made an investment rather than merely engaging in consumption.

        • jasmith79 says:

          This makes no sense.

          The whole point of asset class diversification is that it prevents a massive devaluing of that asset class from wiping you out. Having all of your money in your best performing asset? That’s daring the universe to screw you. Dinosaurs followed that “overfit to the current phase of the cycle” strategy. Worked great for millions of years… until it didn’t. Bernie Madoff wiped people out because his fund was their best performing asset (“can’t get returns like that anywhere else in this economy”) and they put all their money into it… until it wasn’t. The central point of the entire book under discussion is that you don’t know as much as you think you know, and should make decisions with a sense of epistemic humility.

          Naive optimization like that is exactly what Taleb is (repeatedly, almost to the point of being boring) ranting against.

          • idontknow131647093 says:

            Optimization à la the dinosaurs is going to be good for the average fellow for millions of years, though.

        • bizwacky says:

          Reasons 1 and 3 don’t really make sense to me. Say you’ve got two investments: one has higher expected return and lower variance than the other, and the returns of the two are highly correlated. You’d still get a diversification benefit from holding a little bit of the worse asset, as long as the correlation isn’t 1. That’s the idea behind all the advice you see to buy index funds rather than individual securities. A diversified portfolio can always dominate an undiversified one at any given level of risk.
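
          That claim can be checked with the standard two-asset formula, sigma_p^2 = w^2*s1^2 + (1-w)^2*s2^2 + 2*w*(1-w)*rho*s1*s2. (Strictly, a small holding of the worse asset only reduces risk when rho < s1/s2, but the spirit of the point holds.) A sketch with made-up figures:

            from math import sqrt

            S1, S2, RHO = 0.10, 0.20, 0.30  # vols and correlation, made up

            def portfolio_vol(w_b):
                # w_b: weight in the worse (higher-vol) asset
                w_a = 1 - w_b
                return sqrt(w_a**2 * S1**2 + w_b**2 * S2**2
                            + 2 * w_a * w_b * RHO * S1 * S2)

            for w_b in (0.0, 0.05, 0.10, 0.20):
                print(f"{w_b:.0%} in the worse asset -> vol {portfolio_vol(w_b):.2%}")
            # 0% -> 10.00%, 5% -> 9.85%, 10% -> 9.79%, 20% -> 9.96%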

    • eccdogg says:

      It is impossible to know, for Black Swan reasons (there could always be a huge calamity that has yet to happen), but in general very tailish options usually look very overpriced.

      There is a trader slogan I have heard that is perhaps a bit crass but gives you an idea of how traders view selling well-out-of-the-money options: “Sell a teenie, lose your weenie.” Consequently, traders try to price those things very expensively. So my guess is a barbell strategy is not a winner, at least over the time horizon of most folks’ investments.

    • raj says:

      I’d imagine there are a fair number of bogleheads here? For those who aren’t aware, it’s an investment philosophy predicated on the idea that nobody can really beat the market (well borne out by research), so paying financial advisors is just throwing money away. Further, since the market is a random walk biased upwards, there’s no sense in trying to time your investments around market corrections. The correct time to invest is always *right now*. So you should buy and hold the lowest-expense-ratio index funds according to your risk profile. This strategy has the added benefit of requiring basically no micromanagement.

      I have most of my money in vanguard stock ETFs/funds (.04% expense ratio). I believe I can pick up enough pennies to offset being steamrolled once or twice.

      • Robert Jones says:

        I have not heard it so described, but this is essentially my investment philosophy.

      • ReaperReader says:

        Buy offshore funds too. If your home country’s economy tanks, your risk of losing your job and your home goes up significantly, so having money offshore is a natural hedge.

        This may not be so useful for Americans. The USA is big.

    • Tenacious D says:

      Putting some money into Bitcoin pre-2016 would have been a good example. Does anyone know if Taleb did?

      Currently, I’d like to give equity crowdfunding (e.g. Wefunder) a try, though it probably doesn’t fully fit the criteria for the 15% side of the barbell. I’m skeptical anyone raising money this way has true black swan upside potential, but the variance is definitely higher than say an index fund.

    • nobody.really says:

      I follow a modified version of Taleb’s investment portfolio strategy.

      First, I have a mortgage.

      Second, I have a pension.

      Third, I have an employed spouse.

      Finally, I (and my spouse) invest some amount–a small share of our total assets–in the stock market. I suspect Taleb would conclude that we’re under-exposed to risk, but that’s the portfolio du jour.

  8. A1987dM says:

    Drug 1 has a side effect of mild headache in 50% of patients. Drug 2 has a side effect of death in 0.01% of patients.

    Had you said “1%” you’d have a good point, but around 0.01% of patients will die of natural causes within 4 days anyway.
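
    The arithmetic, assuming a crude all-cause death rate of roughly 0.9% per year (about the US figure):

      ANNUAL_MORTALITY = 0.009  # crude all-cause rate; roughly the US figure
      four_day = 1 - (1 - ANNUAL_MORTALITY) ** (4 / 365)
      print(f"{four_day:.4%}")  # ~0.0099%, i.e. about 0.01% over 4 days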

  9. andrewducker says:

    A naive empiricist will judge them by their results, see that Banker 1 has done better each of the past five years, and give all his money to Banker 1, with disastrous results. Somebody who has a deep theoretical understanding of the underlying territory might be able to avoid that mistake.

    That’s a _really_ naive empiricist: one that only looks back over a tiny amount of time, rather than looking at a graph of the stock market over more than, say, 8 years and seeing the repeated rise/fall it follows.

    • The idea that we should systematically review data rather than rely on a gut instinct that might be subject to recency bias looks, on its face at least, like the sort of systematizing that Taleb is telling us to distrust.

  10. MH says:

    Platitude looks like it comes from the French word for flat/vapid/etc., so there might be a link there. But it would have to come through Latin.
    From the history we have available (the not-always-entirely-reliable Diogenes Laertius) ‘Plato’ (Platon) was called that in the ‘broad’ sense of the word, because the guy was broad shouldered. It was a nickname from when he was younger, given to him by his wrestling coach. We aren’t sure of what his given name was – maybe Aristocles. But it doesn’t look like it was his actual name. It was a known name at the time though.

    • Doctor Locketopus says:

      > But it would have to come through Latin.

      From PIE *pleth₂-. It’s also seen in words like “plate” and “platform”.

    • Nietzsche says:

      I was about to post the exact same thing.

    • Protagoras says:

      Aristocles seems to have been his grandfather, and there was a tradition of frequently naming the eldest son for his grandfather. But Plato wasn’t the eldest son, which is one of the reasons many scholars suspect this is one of the cases where Diogenes Laertius was just making things up and he really was named Platon by his parents.

  11. Douglas Summers-Stay says:

    I am reading the first section as a mild parody or satire of the cherry-picking that Taleb is doing when he talks about black swans. I hope that is what was intended.

  12. phoniel says:

    Here is my beef with Taleb, that I’ve been waiting a long time to write up. I’ll try to be short and sweet.

    Taleb talks a lot about the perils of ignoring unseen evidence. He uses the example of a naive scientist who notices that most gamblers started out lucky, and concludes that there is such a thing as beginner’s luck. This scientist does not realize that all the gamblers who started out unlucky stopped gambling, and thus removed themselves from the dataset. He ignores the unseen, and a wrong conclusion follows. A lot of Taleb can be boiled down to the commandment to respect the unseen. As Hamlet put it, “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy.”

    So here’s my problem with Taleb: imagine a scientist comes up with a bell-curve-using fire-suppression system that successfully predicts and counteracts office fires in 99% of cases. Then those 99% of cases instantaneously become “unseen” evidence. We will only notice the 1% of failures, because when the model is working, it’s invisible. On the hundredth occurrence of a potential fire, the system will break and the office will burn down, and Taleb will jump on the table, shouting, “I told you so! The Bell Curve is a lie!”

    If you write an entire book about Black Swans – which you define as those events which our models cannot predict – then of course you’re going to arrive at the conclusion that our models are useless. This tells you nothing about our models’ performance relative to the alternative, which is no models. To put it another way: our model might be wrong, but if Taleb doesn’t buy it, his building has a 100x higher chance of burning down.

    The way I square this with my image of Taleb is by assuming that Taleb has already thought of this counterargument, and just wants to propagandize in favor of more epistemic modesty, with the goal that we consciously build systems to be anti-fragile to model failure. But that makes a lot of the book pretty boring to me, since I can only enjoy it for its rhetorical value.

    At one point, Taleb says something like, “The financial system lost more money in 2007 than it had gained in its entire existence.” (Am I remembering right? It seemed crazy, and it was pretty quickly mentioned and forgotten.) This is the sort of statement that I want more of, because it’s the only way of knowing whether we were better off with the financial models or not. I really wish he’d talk more about this sort of thing.

    • Luke Perrin says:

      The financial system lost more money in 2007 than it had gained in its entire existence.

      I don’t know if Taleb said that, but I don’t think any interpretation of it is true. For example the stock market crash in 2008 only put the S&P 500 back to its 1997 level, and its 1997 level was much higher than its level when it opened in 1957.

      • moridinamael says:

        I’m curious to understand how that could possibly be true. Perhaps it has to do with where he’s drawing the borders of “the financial system”?

        • Eponymous says:

          Maybe he was talking about the profits of a set of major financial institutions?

          Certainly a number of financial institutions went under, and a lot more would have without a bailout. That seems consistent with his claim.

          • JASSCC says:

            I think we definitely have to include the cost of the bailout, and financing the bailout, and the hidden cost of future bailouts necessitated by the TBTF phenomenon now being quasi-official, and any sequelae that may ensue from the Fed growing its holdings so enormously. Taleb routinely decries Obama for costing the US trillions to save the finance industry. Here’s an example:

            https://twitter.com/nntaleb/status/800300732222152704

            The cost is spread around and could be paid down via taxes, but basically it’s a huge net transfer from ordinary people to the finance system, according to Taleb.

          • beleester says:

            AFAIK, TARP (the bank bailout program) made more money than it cost, so I’m not sure in what sense it “cost the US trillions.”

          • baconbits9 says:

            TARP funds were basically paid back by the American Reinvestment and Recovery Act, treating one bailout program as an independent action won’t get you a good understanding of the situation.

          • JASSCC says:

            TARP was a small portion of the bailout.

            https://www.pbs.org/wnet/need-to-know/economy/the-true-cost-of-the-bank-bailout/3309/

            “But it turns out that that $700 billion is just a small part of a much larger pool of money that has gone into propping up our nation’s financial system. And most of that taxpayer money hasn’t had much public scrutiny at all.

            According to a team at Bloomberg News, at one point last year the U.S. had lent, spent or guaranteed as much as $12.8 trillion to rescue the economy. The Bloomberg reporters have been following that money. Alison Stewart spoke with one, Bob Ivry, to talk about the true cost to the taxpayer of the Wall Street bailout.”

            If the bailout of TBTF institutions sets a new norm, then it needs to be treated like what it is — an insurance program. The insurance program needs to charge premiums that cover the costs, not simply diffuse it across the whole economy so that everyone subsidizes the TBTF institutions in tiny, invisible and constant payments.

    • HeelBearCub says:

      I think the econ talk for this is:
      “It takes a model to beat a model”

  13. bullseye says:

    Regarding the doctors and mammograms: Are they really doing the correct procedure without statistics? Or are they following instructions written by someone who does know statistics? It seems to me that the mammogram machine would have to come with training in how to use it and interpret the results.

    • Luke Perrin says:

      By the way, there’s no “mammogram machine”. A person (or two if you’re lucky) just looks at the x-ray and decides whether it looks like cancer or not.

    • fnord says:

      I mean, “follow well established best practices” is an important way that people proceed in the absence of a detailed model of the underlying domain. So it is, in some sense, still a valid point.

      But, yeah, the fact that doctors achieve this by following guidelines established by people who do know statistics is also relevant.

    • Douglas Knight says:

      They are doing the wrong thing, exactly because they don’t understand statistics. Which is why people put a lot of work into convincing them not to do mammograms.

  14. onyomi says:

    Slightly tangential, but I was recently reading about some early Daoist theories of diet, health, and spirituality that threw into relief for me how easy it must have been, and, to a lesser extent, probably still is, for very smart people to build a completely erroneous paradigm on what seem like solid inferences about observed, empirical phenomena:

    The early Daoist theory was that everyone needs energy (qi) to live, and one obtains energy in a number of ways: from the air (breathing), from food and drink, and from ingesting medicinal substances, like herbs and arsenic (one can understand how one might think that e.g. consuming caffeine-type alkaloids gave one “energy”; ingesting small quantities of arsenic was also a fad in the West because it has a stimulant effect and causes a rosy complexion).

    Of these three “fuels” air is most “clean burning,” herbs and medicines second-best, and regular food and drink third (after all, they do rot, to some extent even in your colon). Therefore, if one could develop the capacity to depend ever more on medicines and breathing and ever less on food, finally arriving at a state where no food at all is necessary, then one should be immortal… right?

    • melboiko says:

      I like the one where the evil femme-fatale Empress lived for centuries by draining the vital energy of young men through common sex.

      Men seem drained and exhausted after orgasm, so clearly women must be sucking their vitality. The mechanics even look like they’re eating something extracted out of the male.

      • Bla Bla says:

        That’s why some daoist holy men recommended as a path to health and even immortality having sex with pre-pubescent virgin girls, but without ejaculation.

      • sketerpot says:

        The obvious problem here is that they didn’t even consider the competing hypotheses. For example, men’s vitality could be sucked out by invisible voyeuristic lizard ghosts. Or the phenomenon could be a blessing meant to promote a virtuous cycle of sleep and regularly-scheduled sex. Or it could be the work of a sorcerer with a weird sense of humor and a talent for casting plausibly-deniable curses. All of these explanations would produce the same effects, and what evidence did they have to favor one explanation over the other?

        The under-appreciated moral of the story is that hypothesis space is huge, and almost all of it is wrong.

        • onyomi says:

          I think failure to consider competing hypotheses is part of it, but to be fair, many of the more correct hypotheses probably would have required even more fundamental adjustments to their basic, seemingly not in need of questioning, worldview. For example, I don’t think most actual theorists of Daoist sexual health believed the woman “sucked energy” out of the man during sex, but rather that ejaculation depleted a man’s energy, period (surely they’d notice, like the NOFAP reddit, that you can feel tired after masturbation too). On top of this were additional folk theories like nocturnal emissions being the possible result of succubi.

          But anyway, so the theory is that ejaculation drains the energy, while sex without ejaculation enhances it, both of which have a lot of subjective backing. Now, if they had been able to theorize an equivalent of the sympathetic and parasympathetic aspects of the nervous system, a hypothesis closer to the reality might have entered their possible hypothesis space. But given that they were working with a “more or less energy flows through energy pathways” model of the body, rather than a “nerves exist at lower and higher levels of excitation” model, it seems harder for them to imagine (they did have the idea that different energy pathways, responsible for different functions, were more or less active at different times of day, so they might have been on the right track there, but again deceived by a need to fit that model into a “yin-yang-five-phases” theory similar to Galen’s humours and based partly on the cycle of seasons).

          On the one hand, this story of Daoist discoveries and failures is encouraging to me in the sense that one can figure out useful, correct-ish things even when working with an incorrect model (and no model is perfect). On the other, I also increasingly feel like technology revolutionizes theoretical models better than theorizing: once you have the microscope, developing the germ theory seems almost inevitable; otherwise, the probability of it breaking through the many other seemingly plausible possibilities and staying there may be very low. But then again, maybe you don’t think to invent the microscope unless you expect to find something interesting by looking?

  15. Rowan says:

    The word for “hobby-cavalry” is “hobilar”. Or hobelar, depending whether I believe Wikipedia or Medieval 2: Total War.

  16. Eponymous says:

    I have a more specific worry about skeptical empiricism, which is that it seems like an especially dangerous way to handle Extremistan and black swans.

    The line of criticism following this sentence must be wrong, since Taleb makes exactly the same point in very similar language. So whatever he’s advocating here, it’s not what you take him to be saying.

    I think this is what he’s trying to say: suppose you observe a number of good events: picking up pennies, sunny days, or days where the farmer doesn’t cut your head off (if you’re a Turkey). What can you conclude from this?

    The answer is that it depends on your assumptions about the underlying data generating process.

    Standard statistics makes heavy use of distributions that have nice asymptotic properties. Then you can prove theorems like “as the number of observations goes to infinity, this estimator converges to the correct values with probability 1”, and you can put nice error bars around your estimates given the number of observations you have seen, and so on.

    The result is that a lot of standard statistical tools have these assumptions baked in. And therefore they’re baked into most analysis by analysts, whether in banks, in academia, or in the government.

    But of course, there are other statistical processes. And arguably these are the ones that produce the greatest share of the really meaningful events anyway. But we end up blind to their possibility because of our theories.
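
    A quick illustration of the contrast (a made-up example, not Taleb’s own): the sample mean of normal draws settles down exactly as the nice theorems promise, while the sample mean of Cauchy draws – a textbook heavy-tailed process with no mean at all – never does:

      import random

      random.seed(1)

      def sample_mean(draw, n):
          return sum(draw() for _ in range(n)) / n

      normal = lambda: random.gauss(0, 1)
      # A ratio of two independent standard normals is standard Cauchy:
      # no finite mean, so the law of large numbers never kicks in.
      cauchy = lambda: random.gauss(0, 1) / random.gauss(0, 1)

      for n in (100, 10_000, 1_000_000):
          print(n, round(sample_mean(normal, n), 4), round(sample_mean(cauchy, n), 4))
      # The normal column hugs 0 ever tighter; the Cauchy column keeps jumping.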

    You seem to be assuming that “be empirical rather than theoretical” means to use a sort of naive process of induction that ends up falling prey to this very error; but Taleb sees the assumptions of nice asymptotic properties (“mediocristan”) as the theoretical blinder, and that’s what he’s warning against.

    That said, I agree with you that Taleb is not terribly clear on precisely what alternative he is advocating beyond “don’t be blind to the possibility of black swans”.

    (Side note: Shouldn’t Taleb be a Christian? Pascal’s Wager (black swan) plus ancient tradition must be pretty convincing to him.)

    I would add that it does seem to be the case that many academic types tend to believe that things can’t be a certain way based on their theories, whereas practical people actually involved in the fields believe otherwise. An obvious example is the great number of economists who reject the possibility of bubbles in financial markets on theoretical grounds, despite the long history of things that sure look like bubbles. Another example would be most of the other social sciences, but I digress.

    • Gazeboist says:

      (Side note: Shouldn’t Taleb be a Christian? Pascal’s Wager (black swan) plus ancient tradition must be pretty convincing to him.)

      The issue with Pascal’s Wager is that it doesn’t tell you to follow (in Pascal’s original formulation) Catholicism. It tells you to follow as best you can a Romanized hybrid of Christianity, Islam, and possibly several other major religions.

    • Janet says:

      Taleb is an observant Orthodox Christian, and he makes much of how the periodic fasting and feasting imposed by his Church is much wiser than trying to find an optimum “steady state”.

  17. Eponymous says:

    I see Kahneman, Tetlock, Silver, and Yudkowsky as all being in the tradition of finding optimal laws of probability that point out why the doctors are wrong, and figuring out how to train doctors to answer probability questions right. I see Taleb as being on the side of the doctors – trying to figure out a system where the right decisions get made whether anyone has a deep mathematical understanding of the situation or not.

    I think you’re wrong about Eliezer here. I used to think as you do, but my view of him has changed a bit recently. I mean, don’t get me wrong, he definitely likes teaching people good probability theory; but I think that he’s more interested in getting the doctors to treat people correctly than anything else (“winning”), and he’s a lot more suspicious of explicit use of quantitative probabilities than the other people you group him with. I don’t think I would put him with Taleb as opposed to those people, but I would probably put him somewhere in between the two groups, or maybe as the sole element of an entirely separate cluster (i.e. not just a linear combination of the two, but including an orthogonal component as well).

    • Gazeboist says:

      That doesn’t really square with the sequences, though, where he generally prefers methods that are guaranteed to produce the best answer over methods that produce a good enough answer before opportunity costs become a factor.

  18. J Mann says:

    Eric Falkenstein is IMHO Taleb’s most interesting critic, and the fact that he hasn’t been attacked speaks well for Scott’s chances – I can’t see Scott getting farther out on the risk curve than Falkenstein. See eg here and here. He criticizes Taleb mostly from an investment perspective, so for all we know academics really are as dunderheaded and inferior to Taleb as Taleb reports.

    There is an investment fund fairly closely associated with Taleb’s principles – Universa – although Taleb has clarified that he’s not running it. I’m not sure how well it’s done.

    • j1000000 says:

      I believe Falkenstein has talked about Taleb emailing his boss, and Taleb has a page up about how Falkenstein stalks him, which used to be a high Google result when you searched his name (I found it once while looking for his blog). So that seems as far as you’d want to go on the risk curve. Falkenstein, judging by his blog and Twitter, is a pretty calm and reasonable guy; his great sin was that he disliked Taleb’s books and made it known.

  19. Robert Jones says:

    Instead of putting your money in “medium risk” investments (how do you know it is medium risk? by listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills—as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital-style portfolios.* That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your “floor,” the nest egg that you have in maximally safe investments.

    This is really bad advice, and there’s a significant risk that some readers of the book will follow it, and it’s irresponsible of Taleb to say it.

    On conventional risk analysis, Taleb succeeds in constructing a portfolio which has a medium level of risk (i.e. variance), but which in most scenarios will limit the investor’s loss to 10-15% of his capital. But very few investors seek a particular level of risk for its own sake: most investors seek to maximise their expected return, subject to some level of risk tolerance. Because the return on treasuries is very low (barely above zero in real terms), the expected return on a portfolio which is 85-90% treasuries is also low. The overall effect is that most of the time you lose 10% of your capital, you very rarely lose more than that, and occasionally you make a fat profit. I doubt there are many investors who want that behaviour.
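    To put toy numbers on that behaviour, here is a sketch with invented return distributions (nothing here is from the book): a conventional medium-risk portfolio versus the barbell, compounded over 30 years.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_years = 100_000, 30

    # Invented one-year returns: T-bills earn ~0% real; "medium risk"
    # is 5% +/- 15% Gaussian; each speculative sleeve loses everything
    # 90% of the time and returns 15x otherwise.
    medium = 1 + rng.normal(0.05, 0.15, (n_paths, n_years))
    spec = np.where(rng.random((n_paths, n_years)) < 0.10, 15.0, 0.0)
    barbell = 0.90 * 1.00 + 0.10 * spec   # rebalanced every year

    for name, gross in [("medium", medium), ("barbell", barbell)]:
        wealth = gross.prod(axis=1)
        print(f"{name:>7}: median x{np.median(wealth):.2f}, "
              f"1st percentile x{np.percentile(wealth, 1):.2f}")
    ```

    With these made-up parameters the barbell’s median path actually loses money over 30 years, which is exactly the “most of the time you lose, occasionally you make a fat profit” behaviour described above.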

    From a ‘black swan’ point of view, the portfolio is hopeless, because how does Taleb know that Treasury bills are extremely safe? (Is he listening to tenure-seeking “experts”?) Why should Treasury bills be immune to unknown risks? The market effectively prices Treasury bills as if the risk of US default was zero, not because the risk actually is zero, but because it’s impossible to model. You can’t reasonably be confident that a CDS would pay out in the event of US default, so you can’t even hedge against it. Like the possibility that we all die in a nuclear war, it’s too disastrous even to worry about. As a matter of fact, there are plenty of reasons to think that the risk of US default is quite a bit higher than zero.

    For exactly this reason, people who are really conservative about investing buy gold, which has zero yield, but which you might think will retain its value even in a scenario of total financial meltdown (or in the aftermath of a nuclear war). In fact my brother does hold gold for this reason, and he invests for a living. Of course, he doesn’t invest his clients’ money in gold, and there are good reasons why professional investors should be ultra-conservative in their personal investments (because they’re over-exposed to market risk through their remuneration). That still doesn’t deal with the black swan problem though: just because gold has been valuable for the whole of human history doesn’t guarantee it will be valuable in the future.

    Holding a fully diversified portfolio (which is conventional investment advice) also fails to protect against all black swan events, because who can say that there won’t be a black swan which simultaneously tanks all asset classes in all markets? Nevertheless, it does mitigate the risk as much as anything, because the risk of a black swan event adversely affecting a particular asset (like treasuries) must be higher than the risk of black swan events adversely affecting all assets simultaneously (because the latter implies the former).

    • Picador says:

      Came here to post this. Taleb’s “barbell” strategy sounds to me like: invest 90% of your money in the form of cash stuffed into your mattress; spend the last 10% on lottery tickets.

    • Deiseach says:

      It sounds like basic prudence: once you’ve limited your risk as much as possible (so only invest a small proportion in risky things, and only up to a limit that you are comfortable losing – don’t try putting all your life savings on the favourite in the 3:30 at Epsom!) then go ahead and engage in that risky behaviour, because you’re not jumping in blind – the risk is still there that you’ll lose your shirt, but at least not your shoes and trousers as well.

      I don’t know that this is such exotic advice that it requires an entire book and the invention of the new concept of ‘black swan events’ to justify it, but I have to say that from this review, Taleb sounds like someone more invested in style than substance – convinced of his own rough-hewn autodidact genius he spurns the consensus of those weedy swots in the ivory towers and slaps a new coat of paint on common saws then sells them as brand-new notions in the marketplace of ideas.

      That’s probably being terribly unfair, but the guy sounds like a massive pill (he makes a habit of being aggressively horrible to bad reviews? invents a revenge fantasy story of his self-insert being the seductive hunk and the cuckolds are his academic rivals? says nobody knows nothing but him and the few horny-handed sons of toil he patronisingly uses as Magical Negroes – ‘Fat Tony’? Really?)

      • The Nybbler says:

        Taleb isn’t saying “Save most of your money but go ahead and play the ponies with what you can lose” because you like playing the ponies, though. He’s claiming the equivalent (with “investing in high risk ventures” replacing “playing the ponies”) is actually the financially superior strategy, in particular superior to keeping it all conservative. That’s quite a bit different.

        (And “Fat Tony” is a funny name for a horny-handed son of toil; it’s more a mobster name, and not just on _The Simpsons_.)

        • baconbits9 says:

          He’s claiming the equivalent (with “investing in high risk ventures” replacing “playing the ponies”) is actually the financially superior strategy, in particular superior to keeping it all conservative. That’s quite a bit different.

          Taleb might be claiming that it is financially superior, but it is not central to his thesis that you will literally make, or expect to make, more total dollars with such a strategy; the central point is that you benefit from avoiding ruinous situations.

          • beleester says:

            The goal is both to avoid financial ruin and to grow your investments enough that you can retire by age 65 or so.

            If you put all your money into treasuries except for some amount that you play the ponies with, you’ve avoided the risk of losing your shirt in the stock market, but replaced it with the risk that, if none of your high-risk investments pay off, your nest egg won’t be big enough to retire on.

          • baconbits9 says:

            The goal is both to avoid financial ruin and to grow your investments enough that you can retire by age 65 or so.

            If you put all your money into treasuries except for some amount that you play the ponies with, you’ve avoided the risk of losing your shirt in the stock market, but replaced it with the risk that, if none of your high-risk investments pay off, your nest egg won’t be big enough to retire on.

            While this is true now, it was less true until recently. As I note elsewhere, returns on long-dated Treasuries were far higher in the recent past. The 30-year Treasury yielded >7.5% for almost the entire decade of the 80s, was over 10% from 1980 through ’85, and was well above 5% for most of the 90s. It is only very recently that yields have been low enough that retiring on them would be unthinkable.

          • meh says:

            @baconbits9
            Is that adjusted for inflation?

            The real reason for easy retirement planning in the 80s was probably pension plans.

            https://finance.zacks.com/difference-between-pension-plans-vs-ira-4048.html

            According to CNN Money, only about 10 percent of employers now offer pension plans, compared with 60 percent during the early 1980s.

          • baconbits9 says:

            @ meh

            Those are the nominal returns, as are stock market returns (typically).

          • meh says:

            Pretty meaningless for this argument then, no?

          • baconbits9 says:

            No, not at all. Why would that be so?
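            For what it’s worth, the nominal-versus-real point can be made concrete with the Fisher relation; the yield/inflation pairings below are illustrative assumptions, not historical data.

            ```python
            # Fisher relation: real = (1 + nominal) / (1 + inflation) - 1
            pairs = [(0.10, 0.06), (0.075, 0.04), (0.03, 0.02)]
            for nominal, inflation in pairs:   # assumed, not historical
                real = (1 + nominal) / (1 + inflation) - 1
                print(f"nominal {nominal:.1%}, inflation {inflation:.1%}"
                      f" -> real {real:.2%}")
            ```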

    • arlie says:

      But very few investors seek a particular level of risk for its own sake: most investors seek to maximise their expected return, subject to some level of risk tolerance.

      I don’t think so. Financial institutions model their customers as behaving this way, and ask questions to gauge their risk tolerance. So customers answer the questions and provide evidence that confirms the planners’ beliefs.

      But that feels like a way I’ve learned to think, or at least communicate, that doesn’t really model my actual desires. I’m having trouble putting my finger on the difference, except that my current thought process has a concept of “enough”, expressed in terms of lifestyle.

      How much $$$ I need to provide “enough” is subject to all kinds of unpredictable factors, which can lead to some behaviour that looks like straight up attempts to maximize expected value while controlling downside risk (what you say everyone wants). But that’s “means” not “ends”.

      Now maybe I’m an outlier here too, as I am in so many other areas. But then again, maybe I’m just a normal middle aged person – if a person with significant retirement savings can be considered remotely “normal” currently 😉

    • Eric Rall says:

      The market effectively prices Treasury bills as if the risk of US default was zero, not because the risk actually is zero, but because it’s impossible to model.

      Also because most of the scenarios where the US federal government defaults on its debt in a way that loses you a big chunk of your investment (*) are also scenarios where no investment in US-based assets or by US-based investors is safe. The search term is “Sovereign Ceiling”. If the federal government goes insolvent, then that trashes a big chunk of the economy (starting with all the banks and insurance companies that hold their legally-required reserves in treasuries), and it’s likely to come after a series of increasingly-aggressive measures to raise revenue have failed. The aggressive revenue measures themselves are likely to trash your investments (particularly if they include a substantial wealth tax), and the fact that they’ve failed says bad things about the US government’s state capacity in these scenarios. The best hedge against these scenarios is probably a balanced portfolio of canned goods and ammunition.

      (*) As opposed to a “technical default” where you get your check a few days late because Congress and the President got carried away playing chicken with the laws authorizing raising the money to make the debt payment.

      • Mark V Anderson says:

        The best hedge against these scenarios is probably a balanced portfolio of canned goods and ammunition.

        Yeah, I was thinking while reading these comments that based on Taleb’s comments (or at least based on Scott’s comments on Taleb’s comments), he should be a prepper. If his major concern in life is about disastrous black swans, then he should be putting a lot of his wealth in a home in the middle of nowhere, with lots of stocked food and firearms. He should also develop a lot of human capital on learning to survive in the case of various disasters.

  20. Nicholas Conrad says:

    If that many people misinterpret Taleb’s work, maybe the problem is Taleb’s writing? ¯\_(ツ)_/¯

  21. Wesley Mathieu says:

    Taleb’s later works flesh out his ‘character’ a lot more, and help make sense of him. Skin in the Game in particular felt like his personal manifesto of how to solve (or at least mitigate) most of the world’s problems. But I mean that in a good way.

    “Skin in the Game” is also his explanation for going after book reviewers so hard. He believes they normally get to criticize authors even when misunderstanding the author’s work, and face no repercussions for being blatant “imbeciles.” By making it a point to go after every bad reviewer, Taleb is putting some of their skin in the game since they should be aware that misunderstanding the concepts will end up with the author popping up and whacking you with insults.

    To the extent this inspired Scott to be more cautious and thoughtful in his approach to this review, it seems to have worked!

    Skin in the Game also unveils another reason why Taleb favors practice over theory, the ‘traditional’ wisdom of common folk over the advice of ‘experts,’ and risk-takers over intellectuals. He identifies the concept of the “Lindy Effect”, wherein the future life expectancy of non-perishable things is in large part determined by the length of time they’ve already survived. That is, the more shocks and stressors they have been through and not died, the more robust they are *proven* to be and thus should be poised for long term survival. He points out this applies to ideas and advice as well. He argues that ideas which have to be shielded from criticism, and which can only survive within a university system but crash and burn when applied in the ‘real world’, are dangerous when taken as expert advice by policy-makers, CEOs, financial planners, etc.
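    The Lindy Effect has a crisp form when lifetimes are power-law distributed: expected remaining life grows in proportion to age. A small simulation (the tail index and scale are illustrative assumptions) checks this.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Classical Pareto lifetimes with (assumed) tail index alpha. For
    # these, E[remaining life | survived to t] = t / (alpha - 1): the
    # longer something has already lasted, the longer it should last.
    alpha, x_m = 3.0, 1.0
    lifetimes = x_m * (1 + rng.pareto(alpha, size=2_000_000))

    for t in [1, 3, 10]:
        survivors = lifetimes[lifetimes > t]
        print(f"survived to {t:>2}: mean remaining "
              f"{survivors.mean() - t:.2f} "
              f"(theory {t / (alpha - 1):.2f})")
    ```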

    So whenever a ‘nerd’ generates a new model that they believe to be truly explanatory and useful for predicting some complex phenomenon, Taleb’s response will be “Bah! Get back to me in 100 years when it has been through some serious shocks and see if it still works!”

    Basically, when we have habits, traditions, and ‘ancient wisdom’ available to us that have stood the test of time for millennia, it would be stupid to toss it all out in favor of a shiny new theory, since we haven’t actually seen that the new theory works in practice, and it may expose us to new Black Swans that the traditional wisdom, by virtue of its long survival, was shielding us from.

    • Deiseach says:

      Taleb is putting some of their skin in the game since they should be aware that misunderstanding the concepts will end up with the author popping up and whacking you with insults.

      Because that has never happened in the history of reviewing. See what Scott said about Byron and the reaction to the Quarterly Review of Keats. Sorry, but a guy congratulating himself on re-inventing the wheel sounds less appealing by the minute to me. It also sounds in the same arena as the “our major advertisers are going to pull their ads if your review says their product is a steaming pile of crap, so whatever you do, say it’s great even if it is a steaming pile of crap” type of arm-twisting that goes on. ‘If you don’t want Taleb to make himself the bane of your life, for God’s sake say his new volume is a work of staggering genius, even if it’s only his old shopping lists and some quotes nabbed out of Brewer’s Dictionary of Phrase and Fable‘.

      The black swan notion does have cleverness to recommend it, but there’s also the problem that black swans are only outliers when all the local swans are white; if you’re in Australia, they’re the normal population of swans. So your black swan may, in fact, be normal or to be expected for this particular situation.

      • Wesley Mathieu says:

        I don’t think Taleb claims to be the first to use this method of punishing bad critics, only that it’s an example of Skin in the Game that works.

        And if you’re able to expect a black swan, that removes it from the domain of ‘black swan events.’ He hits on the concept of ‘grey swans’ in “Skin in the Game” as a form of swan which can be anticipated, but whose actual impact cannot be determined in advance.

        https://www.investopedia.com/terms/g/grey-swan.asp

        The true ‘black swans’ he worries about are the type that the system as a whole is vulnerable to and nobody inside a given system is able to see coming since the event was either completely unknown or was considered ‘impossible.’

        And as long as humans, particularly ‘experts’ and those in positions of power, choose to believe that they live in a world where important events are predictable and controllable and not, essentially, the result of random fluctuations in an unknowable complex system, the concept of black swan risk has not been sufficiently appreciated.

        Hence he goes on to suggest solutions for making systems more robust to survive black swans rather than suggesting “we should figure out how to get better at predicting black swans.” Because it is the exact attitude that “if we are smart enough we can predict and eliminate all risk!” that makes systems vulnerable in the first place.

        So stating the black swan notion has ‘cleverness to recommend it’ is to commit the exact kind of error that Taleb is so terrified of.

        • Deiseach says:

          The cleverality of the notion depends very heavily on “Hey, if someone asked you what colour swans were, you’d say white, right? Because all swans are white! Nuh-uh, turns out there are black swans!”

          But that only works if you’re talking to Northern Hemisphere people. Suppose you were talking to an Australian from the south-east: “Hey, what colour are swans?” “Well, round here they’re black” “Guess what? Some swans are – oh. You already know that”.

          So yeah it’s clever, but it’s not quite as clever as made out. “Things that your model does not incorporate because you have never encountered them before so you never built them in” is the longer-winded way of saying it, and it’s a good point worth bringing to attention, but the surprise factor is a large part of the point and in some situations what is a surprise to you may not be a surprise to him/her/them.

      • cmurdock says:

        Taleb’s defenders seem fond of saying “Yes of course he’s brusque, maybe even an asshole, but what’s important is that he’s right.” But the problem with such people is that it takes someone who is open to criticism to reliably, and not just intermittently, be right. Taleb’s strategy of interacting with critics only works so long as he is right, but if he is ever wrong about something and still responds to criticism by just calling the critic an IYI imbecile who doesn’t even deadlift etc., then his readers (who presumably would want to know the truth) are none the wiser.

    • Dan Fitch says:

      Sounds like Taleb’s later stuff wraps around to James C. Scott’s metis.

    • Andrew Klaassen says:

      That is, the more shocks and stressors they have been through and not died, the more robust they are *proven* to be and thus should be poised for long term survival. He points out this applies to ideas and advice as well.

      In engineering, there’s sometimes the possibility of testing whether something which has lasted for a long time is close to failure: check whether its stress-to-strain curves are getting worse. If an old bridge dips by 5 inches when a big truck goes over it, when it’s only supposed to dip by 1 inch, that’s a sign of hidden weakness and impending failure.

      I idly wonder if there could be analogous tests for ideas.

  22. matthewravery says:

    I started reading this book when I was in my first few years of graduate school studying statistics. I never finished it. Part of that is because, while I really liked it at first (“Yes! People don’t pay attention to tail risk enough!”), his criticism of, e.g., statisticians and “nerds” read to me like attacks on strawmen. The stuff I was being taught in my basic-level courses on statistics and probability was substantially different from his descriptions of the orthodoxy he was railing against. We weren’t being taught that everything was Gaussian; we were being taught: “Here are some models that rely on specific assumptions, and by the way here are some ways to check those assumptions (some sophisticated, some elementary) and here are things that can go wrong if your assumptions fail.”

    I got the impression that he’s mostly talking about folks in finance and then generalizing broadly outside his areas of expertise for rhetorical value. Also, he takes institutional criticisms and then levies them at individuals. Put that together with casting himself as the Sole Voice of Reason, and Taleb ended up sounding to me like a more egotistical, less clever Feynman. (Which I guess can be read as a huge compliment.)

    The last thing I’ll add (and perhaps the most frustrating) is that I don’t recall Taleb talking much (at all?) about estimators. That is to say, a lot of what statistics is about is estimating things, be they model parameters, distributions of outcomes, whatever. And a lot of the math in statistics is about proving properties of certain estimators so that we can have (some) assurances that they are “good” in some sense. If Taleb thinks this is all wasting time because asymptotic normality is irrelevant to cab drivers or whatever, what’s his substitute? “Be empirical” isn’t an answer here because you can construct “empirical” estimators in as many different ways as you’d like, and some of them will be good (for a given set of circumstances) while others will be awful. How do you tell which is which if not by getting a bunch of nerds to do a bunch of math? And if you’re relying on methods that have “stood the test of time”, well, least squares regression has been around for over 200 years, but I wouldn’t apply it to financial problems.

    I mean, it’s fine to point out how specific estimators (or even a class of estimator) can fail and to emphasize that these failure modes cover a higher proportion of scenarios than is popularly understood, but that’s considerably more circumspect than Taleb was being in Black Swan.
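    On the estimator point, a tiny sketch (with invented numbers) of how two equally “empirical” estimators diverge the moment a single Extremistan-sized observation arrives:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # 250 calm trading days (illustrative parameters), then the same
    # sample with one -60% day appended. The mean and the median are
    # both "empirical", but only one is swamped by the outlier.
    calm = rng.normal(0.001, 0.01, size=250)
    crash = np.append(calm, -0.60)

    for name, data in [("calm year", calm), ("with crash", crash)]:
        print(f"{name:>10}: mean {data.mean():+.5f}, "
              f"median {np.median(data):+.5f}")
    ```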

  23. Robert Jones says:

    It seems to me that three distinct modelling problems are being conflated here.

    Firstly, people rely on the central limit theorem (“CLT”) in circumstances where the conditions aren’t met. That’s just bad modelling. It has nothing to do with black swans, because we do know that CLT fails in the tails. You end up with a model which works fine most of the time, but is occasionally wrong by an order of magnitude (the sketch at the end of this comment makes the tail failure concrete). I suspect that often the people who build the model provide a warning that it only works most of the time, but the people using the model forget the warning after it’s worked consistently for 6 months. This may have been part of the problem with LTCM (although it’s hard to be sure because another part of the problem was the opacity of their models).

    Secondly, Goodhart’s Law means that trying to exploit systemic market behaviour changes the behaviour (which is just to say that EMH reasserts itself). This is what happened with CLOs: the very process of collateralisation altered the risk profile of the underlying loans (by introducing a principal-agent problem). This is a bit more like a black swan, because your model fails because of a historically anomalous factor which was unmodelled, but since the anomaly is your own behaviour, it’s a bit much to say that it was unpredictable!

    Thirdly, there are genuine black swans, i.e. events which are outside your contemplation, which you couldn’t possibly model. The difficulty is that knowing there are black swans doesn’t help you: you still can’t model them. So either you model based on what you do know and act based on the information you have (while maintaining some humility about what you don’t know), or you give up modelling entirely. The latter course sometimes seems to be what Taleb recommends, but it comes down to denying the principle of induction and therefore runs into the usual problem of radical skepticism: how do you even leave the house in the morning?
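    The sketch promised under the first point, using an illustrative heavy-tailed distribution: a Gaussian fitted to the same variance matches the body of the data and is off by orders of magnitude in the tails.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(3)

    # A heavy-tailed process (Student-t with 3 degrees of freedom, an
    # illustrative choice) vs a Gaussian with the same variance.
    x = rng.standard_t(df=3, size=1_000_000)
    sigma = x.std()

    for k in [3, 5, 8]:
        observed = np.mean(np.abs(x) > k * sigma)
        gaussian = math.erfc(k / math.sqrt(2))  # two-sided normal tail
        print(f"|x| > {k} sigma: observed {observed:.1e}, "
              f"Gaussian model {gaussian:.1e}")
    ```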

    • guardianpsych says:

      >how do you even leave the house in the morning?

      Knowing down to your bones that you’re leaving subject to unknown risks, modelling errors and falsehoods.

      Taleb is a very arrogant man making a very humble point.

    • moridinamael says:

      I am confused as to what counts as a Black Swan and what doesn’t. Investing in pharma companies is an attempt to profit by exposing yourself to a small chance of a high upside – which implies you have some rough estimate of the character and magnitude of that risk and that upside. A pharma company succeeding and making money for you is not a Black Swan, it’s literally baked into your model assumptions. A real Black Swan would be an alien invasion. Or the invention of a new drug that eliminates the need for any other drugs, ever. You don’t have a model for it, you can’t possibly have rough estimates of the numbers, and the possibility hasn’t even occurred to you. Am I wrong here?

      • Robert Jones says:

        Sounds right to me.

      • Wesley Mathieu says:

        I think that the point about Pharma companies as positive Black Swans is based on the idea that you can’t tell which pharma company will hit on a new miracle drug, what ailment it will be able to treat, and most certainly you can’t tell when they will hit on the new drug. All you know is that drugs for common ailments often end up making their creators a lot of money, once discovered, manufactured, and marketed.

        And it is entirely possible that no new miracle drugs that make the discovering company incredibly wealthy are going to be discovered for years or decades. The area of research is complex enough that it falls outside of mediocristan. Compare the steady, predictable pace of research in processor/chip design, with reliable gains year over year, to pharma research, where a company can discover Viagra while exploring treatment for hypertension and nobody would have guessed it in advance.

        Basically you have an intuition/assumption that new miracle drugs are out there to be discovered. You want to have the ability to capture the upside if your intuition is correct. But beyond that, you could not possibly create a model that could predict which companies or which lines of research are most promising.

        If you had such a model, you could probably use it to just go and discover new drugs yourself.

        So in the case of pharma investments, you’re putting money into a system you don’t have a good model for, you can’t really estimate the numbers or chances of any particular company hitting on a drug, and the possibility *has* occurred to you but could be completely and utterly wrong. If literally no new, game-changing drugs were discovered in the next fifty years, that would be a surprising but possible outcome too.

    • Andrew Klaassen says:

      This is what happened with CLOs: the very process of collateralisation altered the risk profile of the underlying loans (by introducing a principal-agent problem). This is a bit more like a black swan, because your model fails because of a historically anomalous factor which was unmodelled, but since the anomaly is your own behaviour, it’s a bit much to say that it was unpredictable!

      I posted this higher in the discussion, but it might be worth a repeat here: pretty much everybody misidentified the historically anomalous factor which was posing a systemic risk. It turns out that prime borrowers taking out second loans to buy investment properties were the only class of borrowers who ended up defaulting in large numbers.

      • baconbits9 says:

        I don’t think this directly means what you imply: secondary properties are more susceptible to price declines than primary ones, because you can default without losing your home and incurring all the extra costs that come with that. It can be simultaneously true that subprime lending caused the problem but that the contagion was felt to a greater extent in a different sector. This is a property (get it!) of leverage.

  24. Deiseach says:

    Having read this review, I have to say this: I’m glad you read this book, because now I don’t have to. None of this makes any of Taleb’s work sound appealing – hmmm, let me think: do I want to read a series of books by a guy in love with his own genius? No, I think I’ll pass!

    • Picador says:

      Hear hear! Thanks for taking this one for the team, Scott.

      • Eponymous says:

        It’s actually quite an enjoyable read. If you haven’t read any Taleb I recommend it.

      • andrewflicker says:

        I’ll counter Eponymous’s recommendation. Taleb is almost entirely correct in his analysis/point, and simultaneously an insufferable prick whose books were a displeasure for me to read.

    • J Mann says:

      The parts on the Lebanese Civil War are fascinating.

    • The Nybbler says:

      do I want to read a series of books by a guy in love with his own genius? No, I think I’ll pass!

      Good news: You can pass on Malcolm Gladwell too.

      • Deiseach says:

        I fully intend to, Nybbler, once I Google to remind myself who he is 🙂

        *returning after having done so* Yep, definitely one to avoid! In the same vein as nothing, including wild horses or the Archangel Michael, could make me read Harry Potter and the Methods of Rationality after the tiny taste of the work I’ve already had.

        This could be a whole new post series for Scott: I’ve Suffered For My Art So You Don’t Have To – Books You Can Now Safely Ignore, Having Read The Review.

  25. Picador says:

    In re “nerds”: I have to admit some sympathy with Taleb’s distaste for more or less academically successful people with a vastly inflated opinion of their own ability to successfully categorize and systematize the workings of the world.

    Case 1: I was at MIT in the late 90s, when the “extropian” nonsense started flying around. These guys were a bunch of Ayn Rand freaks who thought their big brains gave them super powers. They were incredibly obnoxious, and you couldn’t help but feel embarrassed for their future selves when they had to look back on some of the shit they said. Every time I hear something coming out of the “rationalist community”, present company excluded, there’s more than a whiff of this sort of extropian nonsense about it. I admire some of the rationalist stuff, but there’s a high “nerd” factor, as you seem to have discerned.

    Case 2: Academic economists. Extropians aside (and there is no doubt substantial overlap), these guys have the most inflated view of their own discipline and its predictive power that I have ever seen. Of course, they don’t get paid to be right, they get paid to provide cover for shitty fiscal policy, so of course with incentives like that it’s not hard to solve for the equilibrium, as they would say. Which is why they are a priesthood, like the soothsayers of imperial Rome or the astrologers of imperial China, rather than a scientific discipline. After my engineering and science studies in undergrad and grad school at MIT, I worked for a few years before going to law school. Almost every course I took there was enlightening, but I made the mistake of cross-registering at the up-and-coming school downtown for a Law and Behavioral Economics course to see what I was missing out on, and I was shocked by how shoddy the reasoning was. The prof was bad; the text was bad (Sunstein). THAT is a field for “nerds” in the Talebian sense: people who take a complex, highly uncertain and chaotic field of human endeavor, try to map it to a couple of trivially-disproved psychological and probabilistic models, and conclude that all the problems in the world are the result of other people (especially poor people, non-white people, and women) not being as smart as them and their nerd friends. I mean, why don’t poor people just invest their savings in blockchain-machine learning startups and then get out of the market before the bubble pops, duh.

    What I’m saying is: you’re dead at recess, nerd.

    EDIT: I should have mentioned that the “extropians” seem to have re-branded themselves as “singularity” cultists these days. Same difference.

    • moridinamael says:

      But big brains do give you superpowers. You can’t build a space telescope with the wisdom of the ancients.

      • Gazeboist says:

        Perhaps, but individual-intelligence-uber-alles types often forget that you can double the size of your brain by not insulting the person you’re talking to.

      • Picador says:

        Building space telescopes is exactly the sort of thing you can do with a big brain and a gift for academic study. The “nerds” I’m talking about are the ones who believe that their ability to build space telescopes must be equally applicable to solving problems in areas of human endeavour that have been the subject of literally millennia of intense contemplation by billions of other human beings, including some of the smartest people in history. More importantly, they don’t feel any obligation to read what any of those people have to say on the subject, assuming instead that once someone of their advanced intellect takes a look at the problem they’ll have it solved in a trice with the application of a little linear algebra and dash of biochemistry. Problems like how to relate to other people in a productive way, how to structure a society to ensure prosperity and happiness, how to live a good life — you know, the easy stuff around the margins.

        • Gazeboist says:

          Building space telescopes (that are powerful enough to produce new, interesting results) hasn’t been the sort of thing you can do with no more than a big brain and a gift for study since at least the 19th century. At least one person involved needs to have those traits, but they’re far from sufficient.

      • Deiseach says:

        Why are you denying our big-brained cosmic ancestors? How else were the ancients so wise, save that they were superior advanced civilisations and ancient astronauts with their very own space telescopes? 🙂

    • MikeInMass says:

      Academic economists … are a priesthood, like the soothsayers of imperial Rome or the astrologers of imperial China

      Reuven Brenner of McGill University makes this point at some length in a very worthwhile book, A World of Chance: Betting on Religion, Games, Wall Street. He touches on it briefly in this book review, noting that Johannes Kepler, who worked as an astrologer, admitted in his diaries that he “did not believe a word of his astrological analyses, but had to make a living.” I suspect our modern-day economists could benefit from a little more of Kepler’s skepticism.

    • Scott Alexander says:

      This is the kind of personal attack on a group that I try to avoid in the comments here (and also completely contrary to my own experience). Please consider this a warning and try to disagree more productively in the future.

      (helpful links partly explaining what I am against: here and here).

      • Bugmaster says:

        At the risk of being banned, I’ve got to admit that I sympathize with Picador’s point, if not his presentation or tone. There’s definitely a streak of intellectual hubris that often surfaces in communities of really smart people. It often leads them to assume that disciplines other than their own (such as e.g. machine translation, genetics, interpersonal relationships, etc.) are really simple, needing only a bit more intelligence (i.e., computing power) to solve all the major problems. They often persist in such beliefs, despite evidence to the contrary, which makes them come off as needlessly abrasive sometimes.

        To use a more personal example, I used to hang out with a guy who was much, much smarter than me. Our group of friends would play CCGs, and his decks were usually almost unbeatable. I would beat him about 2 times out of 3, and he would get extremely upset each time, claiming that I was cheating. Which I kind of was, because I basically built my decks specifically to do nothing except for neutralizing his. Somehow, he could never get over the fact that his mathematically optimal decks would lose to something so obviously inferior.

        • Picador says:

          Thanks Bugmaster, I’d forgotten about that particular xkcd comic. It makes the “nerds” critique far more succinctly than I did.

          • Bugmaster says:

            Despite what I said above, I wouldn’t apply this critique to all nerds. In fact, many alpha-male Wall Street types have the same exact intellectual attitude. Not all smart people are intellectually arrogant, nor are nerds the sole intellectually arrogant subgroup.

        • Picador says:

          In reply to your most recent comment: I’ve been putting “nerds” in scare quotes to indicate that I’m talking about Taleb’s idiosyncratic usage. By the conventional usage, I think it’s safe to say that anyone reading this blog or commenting on it, as well as Taleb himself, qualifies as a nerd. At minimum, I certainly include myself in that category.

          I think that, by Taleb’s usage, the Wall Street fin-bros you’re describing may or may not qualify. I would imagine that Taleb has someone in mind who thinks of himself as highly rational and intelligent but who vastly underestimates the amount of uncertainty inherent in certain kinds of speculation. By that definition, the fin-bros are “nerds”. But because Taleb is using that word I sort of imagine that he has academics in mind — people who ought to be more self-aware but aren’t in this particular domain, as opposed to people who are just generally lacking in self-awareness. Hard to say.

      • Picador says:

        My apologies. I should be more measured in my tone, given that this is your blog and I do know and respect your position on civility.

        I have friends and relatives who are economists. I’m being a bit hyperbolic and painting with a broad brush. I do think that economists should have a thick enough skin that they can absorb these sorts of critiques, given the sort of rhetoric that (many of them) like to employ in announcing their own theories, but yeah, I’ll tone it down.

      • Picador says:

        Scott,

        Qualifier on my last reply: it just occurred to me that you’re on my case, not just for busting economists’ balls (fair criticism), but maybe for going after the extropians / Ayn Rand guys?

        If that’s true, I’m kind of torn about how much I agree with your position (although, this being your blog, I will defer to your sensibilities regardless). But Ayn Rand, really? I just re-read your Fnord piece, and while it’s a fine argument as far as it goes, I’m a little confused as to what you propose as an alternative approach. Do we really have to walk through a primer on who Ayn Rand is, what she said, why one might find her ideology distasteful, why we think that someone who subscribes to her ideology might not be the best person to listen to when it comes to social policy, etc? It seems a bit time-consuming, and Wikipedia is a click away.

        Also: doesn’t everyone do this? Language is a heap of pointers to pointers to pointers. It’s all shortcuts and heuristics, all the way down. Yes, it’s better to fight against the current and try to be clear and explicit rather than taking shortcuts and trying to bundle a bunch of stuff into a Fnord. And if someone has never heard of Ayn Rand, I’m happy to walk through the whole argument from first principles. But it’s a bit like saying, “I’m a psychiatrist and I got some hate mail from Scientologists” or “I was giving a lecture on astrophysics and a flat-earth guy showed up to heckle me”. Is it always necessary to tack on a 300-word explanation of why engaging with these people’s arguments isn’t really a great use of time? Doesn’t that pretty much render the whole “not engaging with their arguments” thing moot?

        Anyway, not sure this deserves a response, but it seems to me that you’re happy to dismiss SOME arguments out of hand based on where they come from, and I’m curious where you think the line should be drawn.

    • pontifex says:

      I disagree with a lot of things that various Extropians have said, but I can’t see any reason to hate them.

      Academic economists. Extropians aside (and there is no doubt substantial overlap),

      “Substantial overlap?” Can you name even one academic economist who is an Extropian?

      Economics will probably never be as rigorous as physics or mathematics. But in popular culture, it comes across as much worse than it really is because political leaders tend to ask for the wrong things out of it. They want a crystal ball, not a toolbox.

      • baconbits9 says:

        Economics will probably never be as rigorous as physics or mathematics. But in popular culture, it comes across as much worse than it really is because political leaders tend to ask for the wrong things out of it. They want a crystal ball, not a toolbox.

        I think this is overly generous to politicians. Obama didn’t survey 100 economists asking how best to improve health care in the US; he ran on a platform of increasing access to certain groups in certain ways, and then after winning he sought out economists who were generally in favor of it. He sought out the confirmation bias that Taleb decries in TBS.

  26. Nietzsche says:

    Like some of the other commentators, I couldn’t finish Black Swan. I got about halfway through and gave up. The idiosyncratic structure I could have lived with, but the book is filled with “all philosophers are idiots and morons” followed by praise of Russell and Popper. That kind of inconsistency, along with Taleb’s monstrous ego, made me bail out. “Black swan” itself is a catchy handle for an idea that is useful to possess, but it is hardly the world-beating concept that was advertised.

  27. Rusty says:

    Gigerenzer also critiqued Kahneman’s ideas pretty strongly. Part of the critique was that a lot of the theory relied on how the questions were formulated and that in real life (when not being mucked around by cleverly constructed questions) real people did pretty well. Anyway http://fitelson.org/probability/vranas_2.pdf

  28. baconbits9 says:

    My understanding of Taleb is as follows.

    1. He isn’t a particularly gifted writer; he might even be seen as a bad writer (or as filling his books with entertaining but low-signal anecdotes to boost sales).

    2. The idea of a “black swan” is a highly simplistic observation that is used as an analogy, the discovery of black swans in Australia did not cause a market crash and no one lost their shirt selling black swan futures. The discovery of swans with black feathers wasn’t correlated with anything other than parts of the world being unexplored. Taleb is talking about Black Swans* which are correlated events. If I were to go further I might reduce his major insight to the observation that everything in your life is correlated, in ways that are out of your control.

    So what is a “Black Swan” in this sense? Say in 2007 you looked at a person who had followed pretty standard investment advice. He put money into his 401k from every paycheck and had the money in broad based index funds; he owned a house that he bought 3-4 years ago and, between price appreciation, down payment and principal payments, has a nice equity cushion; he keeps around $10,000 in cash in his bank accounts and has health insurance through his work.

    This person can handle many individual events. If the stock market drops he can ride it out, if housing prices fall he can make his mortgage and not be forced to sell, and if he loses his job and his income is temporarily lower he can burn through cash and either sell stocks or borrow against his house while he readjusts his lifestyle.

    His personal Black Swan is getting fired in 2008/2009 when his house is dropping in value and the stock markets are tanking. If he had 30% equity in his house in 2006 and faced an average price decline, that cushion is all but wiped out by 2011/2012; if he was in an area with steeper declines, such as Las Vegas (an extreme example), his equity hits zero in May of 2008 and is -35% by mid 2009. At the bottom his equity (if he held on that long) might be as low as -50%.

    Not only is this guy struggling on all fronts, but he can’t even benefit from the opportunities at the bottom. He doesn’t have any money to put into the depressed market to grab the upcoming gains, and he might not even be able to hold onto his investments and see them bounce back, as he will be faced with pressure to sell and cover his bills. He won’t be able to take advantage of lower interest rates to refinance his house, as he doesn’t have the equity to qualify, and he is going to struggle to find the time, money and motivation to add the new skills that would allow him to take advantage of the new well-paying jobs the economy will eventually start creating.
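    Rough leverage arithmetic for that scenario (the 30% cushion is from the description above; the house price and the decline grid are illustrative assumptions):

    ```python
    # Debt is fixed while the house price moves, so a 30% equity
    # cushion means price declines hit equity with roughly 3x leverage.
    house, equity_frac = 300_000, 0.30   # assumed price and cushion
    debt = house * (1 - equity_frac)

    for decline in (0.10, 0.30, 0.45, 0.55):
        equity = house * (1 - decline) - debt
        print(f"price down {decline:.0%}: equity {equity / house:+.0%} "
              f"of original value")
    ```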

    This is a personal black swan: the steamroller in “picking up pennies in front of the steamroller” isn’t a metaphor for a $1,000 loss, it’s a metaphor for the potential for that $1,000 loss to compound with other economy-wide issues to crush your lifestyle. If you read someone who discusses gains from the sale of options vs gains from the purchase of options, then they have missed this point: Taleb is not discussing financial gains in terms of “making the most money” but in terms of having the kind of life you want. Hence his goal to get to “eff you money”, not to be a billionaire.

    For the semi-typical guy described above, the market crash was a personal Black Swan that he didn’t anticipate or create, and had little hope of knowing about sufficiently in advance. For Taleb, though, the crash of 2008 was not a “Black Swan”: when he discusses the hubris and overconfidence of experts, he blames them for the crisis. In his view it is not that overconfidence blinds people to the potential for bad luck, it is that overconfidence causes the bad luck. It is not “there is a 0.0001% chance that something terribly bad will happen”, it is “acting as if something terribly bad won’t happen, or that it will only happen extremely rarely, MAKES the damn thing happen”. If you live in an area that has been stable for a long period of time and take for granted that stability is a feature, then when there is trouble you are likely to assume it will be short, and are further likely to act as if it will be short. Those actions can then be the cause of extending the conflict. If you build mortgage-backed securities based too heavily on the assumption that a decline in housing prices is going to be rare, then those securities will be the (or a) cause that makes a housing decline a near certainty.

    *Here is where people complain about Taleb’s writing and I won’t disagree with them, but if you want to get to the value in his ideas then you have to accept, and get over, his poor communication.

    • Robert Jones says:

      Say in 2007 you looked at a person who had followed pretty standard investment advice. He put money into his 401k from every paycheck and had the money in broad based index funds; he owned a house that he bought 3-4 years ago and, between price appreciation, down payment and principal payments, has a nice equity cushion; he keeps around $10,000 in cash in his bank accounts and has health insurance through his work.

      This person can handle many individual events. If the stock market drops he can ride it out, if housing prices fall he can make his mortgage and not be forced to sell, and if he loses his job and his income is temporarily lower he can burn through cash and either sell stocks or borrow against his house while he readjusts his lifestyle.

      His personal Black Swan is getting fired in 2008/2009 when his house is dropping in value and the stock markets are tanking. If he had 30% equity in his house in 2006 and faced an average price decline, that cushion is all but wiped out by 2011/2012; if he was in an area with steeper declines, such as Las Vegas (an extreme example), his equity hits zero in May of 2008 and is -35% by mid 2009. At the bottom his equity (if he held on that long) might be as low as -50%.

      Not only is this guy struggling on all fronts, but he can’t even benefit from the opportunities at the bottom. He doesn’t have any money to put into the depressed market to grab the upcoming gains, and he might not even be able to hold onto his investments and see them bounce back, as he will be faced with pressure to sell and cover his bills. He won’t be able to take advantage of lower interest rates to refinance his house, as he doesn’t have the equity to qualify, and he is going to struggle to find the time, money and motivation to add the new skills that would allow him to take advantage of the new well-paying jobs the economy will eventually start creating.

      So what? I’m broadly in the position you describe now. Am I making a mistake? If I lose my job and the housing and stock markets crash, I’ll be in a bad spot. I know that. Your guy in 2007 knew that. None of us think that the job, housing and stock markets move independently of each other. Knowing the risk exists doesn’t help, because we still have no way of mitigating it. Or rather, by holding some property, some stocks, some bonds and some cash, we’ve done as much mitigation as we can.

      • baconbits9 says:

        Your guy in 2007 knew that. None of us think that the job, housing and stock markets move independently of each other. Knowing the risk exists doesn’t help, because we still have no way of mitigating it. Or rather, by holding some property, some stocks, some bonds and some cash, we’ve done as much mitigation as we can.

        You obviously haven’t, though. If all of your investments are positively correlated with each other, the obvious thing you can do to mitigate the downside risk is to make some moves that are negatively correlated with them. This is a central part of (my understanding of) Taleb’s thesis: since nothing is uncorrelated, you cannot diversify effectively by going into different asset classes without intentionally selecting for negative correlations. Additionally his point about uncertainty stands: if you don’t know when the next crash is coming, or how big it will be, or its likely impacts on your life, and have done nothing to prepare for those unknown potentials, then you are creating exactly the blind spot that he decries. You are stating that you know such a risk might exist and then acting as if it doesn’t.
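        A toy illustration of the point, with invented Gaussian returns (which, if anything, understates Taleb’s case): even in a well-behaved world, carving out a slice for a negatively correlated hedge improves the bad-tail outcome for a modest cost in the middle.

        ```python
        import numpy as np

        rng = np.random.default_rng(4)

        # Two "diversified" assets that are really 0.8-correlated, plus
        # a zero-expected-return hedge built to be negatively correlated
        # with them. All parameters are invented for illustration.
        vol = 0.15
        corr = np.array([[1.0, 0.8, -0.5],
                         [0.8, 1.0, -0.4],
                         [-0.5, -0.4, 1.0]])
        r = rng.multivariate_normal([0.05, 0.05, 0.00],
                                    corr * vol**2, size=1_000_000)

        naive = r[:, :2].mean(axis=1)           # 50/50 in the pair
        hedged = r @ np.array([0.4, 0.4, 0.2])  # 20% to the hedge
        for name, p in [("naive", naive), ("hedged", hedged)]:
            print(f"{name:>6}: mean {p.mean():+.3f}, "
                  f"5th percentile {np.percentile(p, 5):+.3f}")
        ```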

        • arlie says:

          That’s actually quite common. My financial advisors have a lovely set of models, but they solve for 90% chance of success, not 100%. And when considering income in retirement, their instinct is to solve for the 90th percentile of life expectancy – I pushed them to 95, but that’s as high as they would go. They – or at least those who designed the model, and associated software – know they can’t reach 100% – but they consistently act like 90% is good enough, and teach their customers (ordinary folks trying to save for retirement) to do the same.

          All this is probably completely separate from outright “black swans” and other, less unpredictable things with very low risks. (Though the modelers may have put in the fudge factor because of the existence of such risks.)

          The question, of course, is how to account for those other risks, and the real answer is “you can’t”. If the Yellowstone supervolcano erupts in my lifetime, my retirement plans will be among the least of the casualties. Ditto for whatever other low-risk but plausible scenario you care to name.

          We can do better about entirely predictable things like economic downturns. Those will happen; the only things we can’t predict about them are when the next one will start and (to an extent) how big it will be. But I’m unsure whether we can do enough better for that to really matter on an individual level. Some of us had the misfortune to enter the job market in 2008, leading to what appears to be depressed lifetime earnings compared to those arriving earlier or later. We just have to cope with it, if it happens.

        • glorkvorn says:

          If all of your investments are positively correlated with each other, the obvious thing you can do to mitigate the downside risk is to make some moves that are negatively correlated with them.

          Unfortunately that’s very difficult to do in practice. All the good investments (ones with a positive long term expected return) tend to be correlated with each other. Whether you buy US stocks, or European stocks, or real estate, or a small business, or farmland, or high-yield bonds, it just doesn’t matter much, because they all go up during bull markets and crash during bear markets. At best you have Treasury bonds that *might* go up in value during a stock market crash, but most of the rest of the time they’re paying barely above inflation. Or gold, which has no yield at all, so you’re just steadily losing money to inflation and storage fees while you wait for the apocalypse.

          My understanding is that Taleb did this by buying deep out-of-the-money put options – options that would pay off a ton if the market crashed, and otherwise just lose a little. That worked great for him in 2008, but (partly because of this book!) those options are now a lot more expensive than they used to be, so it’s not clear that it’s still a winning strategy unless you get very lucky with the timing.

          • baconbits9 says:

            I disagree. If you are willing to sacrifice a small amount of upside you can roll ootm puts every year and dramatically decrease your exposure to catastrophic events.

          • Eponymous says:

            That worked great for him in 2008, but (partly because of this book!) those options are now a lot more expensive than they used to be

            Isn’t it much more likely that they are expensive due to the crisis rather than the book?

          • baconbits9 says:

            That worked great for him in 2008, but (partly because of this book!) those options are now a lot more expensive than they used to be, so it’s not clear that it’s still a winning strategy unless you get very lucky with the timing.

            Taleb constantly harps on exposing yourself to potentially lucky events; “I don’t know when it will happen” isn’t an effective rejoinder.

          • glorkvorn says:

            Isn’t it much more likely that they are expensive due to the crisis rather than the book?

            That’s the main cause, sure. But the book probably contributed too, plus Taleb being such a… “colorful character” bragging loudly and often about how he got rich buying put options.

          • Chalid says:

            I’m not sure, but I was under the impression that they’ve been expensive since the Black Monday crash of 1987.

          • glorkvorn says:

            I disagree. If you are willing to sacrifice a small amount of upside you can roll ootm puts every year and dramatically decrease your exposure to catastrophic events.

            Taleb constantly harps on exposing yourself to potentially lucky events; “I don’t know when it will happen” isn’t an effective rejoinder.

            I think you have to put some numbers on this, though. Suppose you’d been diligently buying ootm puts every year since the crash. How much would it have cost you, and how much downside protection would it actually give you? I know that’s not easy to calculate, but my sense is that if you’re looking to actually profit from a crash like Taleb did, then it wouldn’t just be “a small amount of upside”, it would be a large cost every year. So you have to wonder whether it’s actually better than just buying and holding stocks, and accepting the occasional crash as a fact of life.

            Put it another way: why does anyone sell these ootm puts if they’re such a great investment? Are they all idiots? And even with Taleb writing a bestselling book recommending them, the market is so inefficient that they’re still not priced correctly? That seems unlikely.

          • eccdogg says:

            It looks like Sep 2019 2,525 puts are offered at 60.80. That caps your downside at around a 13% down move. With the market at 2,904 that is roughly 2% cost per year.

            https://www.marketwatch.com/investing/index/spx/options

            So at today’s prices you would have to pay 2% of return to insure against a one year loss greater than ~13%.

            Over 30 years with $100 invested at 7% return you would have $760, at 5% you would have $430

            *This is incorrect; see baconbits’ correction below.
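
            As a rough sketch of the compounding arithmetic behind those 30-year figures (a hedged illustration in Python; the 2% cost interpretation is the part corrected below):

            ```python
            # Compound $100 for 30 years at an unhedged 7% return vs. the
            # lower returns left after paying for 13%-OTM put protection.
            def compound(principal, annual_return, years):
                """Grow principal at a fixed annual return."""
                return principal * (1 + annual_return) ** years

            print(round(compound(100, 0.07, 30)))    # ~761, the "$760" above
            print(round(compound(100, 0.05, 30)))    # ~432, the "$430" above
            print(round(compound(100, 0.0625, 30)))  # ~616, the "$620" in the correction below
            ```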

          • baconbits9 says:

            I think you have to put some numbers on this, though. Suppose you’d been diligently buying ootm puts every year since the crash.

            Any investment strategy that starts at the worst possible time is going to look bad, and no one is suggesting that you go back in time and start right after a crash. However, I did put some (hypothetical) numbers to it further up thread to demonstrate that you can come out ahead even while ‘losing’ money on the options themselves, thanks to opportunistic leverage.

            For example, I pay $1,000 for a put option to you every year on January 1st and you turn around and stick that money in the market. Years one through five the option expires worthless, and then in year 6 there is a market crash and I cash out my put for $5,000 and put that $5,000 in the market. The simplistic view is that you are currently ‘up’ $1,000 on me, as I paid you $6,000 and you paid me $5,000 out. A slightly better version is that you are “up” $1,000 plus gains from having that money in the market for 5 years. The correct answer, though, depends on what happened after I put the money in the market. I got a lump sum that was conditional on the market dropping a substantial amount, and there are certainly times where putting $5,000 in all at once outperforms putting $1,000 in 6 separate chunks.

            Following our hypothetical, you might have invested $1,000 in Jan 2003 when the S&P was around 900, 2004 at ~1,100, 2005 at 1,200, 2006 at 1,250, 2007 at 1,400, and 2008 at 1,450, meaning your average purchase price is (ignoring dividends etc) a little over 1,200. I instead get $5,000 to invest at the end of 2008/early 2009 when the market is around 950. Come 2018, my $5,000 purchase is worth (with the S&P at 2,900) a little over $15,000. The $6,000 averaged at 1,200 is worth a little under $15,000.
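
            A minimal sketch of that hypothetical in Python (same made-up index levels as above; dividends, taxes, and the option pricing itself are all ignored, as in the original):

            ```python
            # Dollar-cost averaging $1,000/year at the quoted S&P levels vs.
            # a $5,000 crash payout invested at ~950, both marked at 2,900.
            buy_levels = [900, 1100, 1200, 1250, 1400, 1450]  # Jan 2003..2008
            final_level = 2900

            dca_value = sum(1000 * final_level / level for level in buy_levels)
            lump_value = 5000 * final_level / 950

            print(round(dca_value))   # ~14,667: "a little under $15,000"
            print(round(lump_value))  # ~15,263: "a little over $15,000"
            ```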

          • baconbits9 says:

            So at today’s prices you would have to pay 2% of return to insure against a one year loss greater than ~13%.

            Over 30 years with $100 invested at 7% return you would have $760, at 5% you would have $430

            I think this presentation could be misleading. You aren’t only insuring against a loss, as you make more as the downswing grows beyond 13%. Insurance is an incomplete analogy, as you don’t typically get payouts in excess of the underlying value.

            The 5% vs 7% returns represent a worst-case scenario, where your put options expire worthless every year; any one year with a large market drop could counterbalance several years’ worth of lower returns.

          • Robert Jones says:

            We need to consider counterparty risk.

          • eccdogg says:

            Baconbits, you are correct.

            The 2% is not the correct number. The correct number is the difference between 60.80 (2%) and the actuarially fair value, which is a much smaller return difference.

            I still think on average the barbell strategy is worse than buy and hold for a return maximizer, but it is not as bad as my example let on.

            So just out of curiosity I took a look at how a 13% OTM put would have performed historically, looking at Shiller’s data on the S&P going back to 1871. A put that capped annual losses at 13% would be worth about 1.26%. I got this by looking at all year-over-year September-to-September returns. Right now that put costs right around 2%. So it is more like you are giving up 0.74% of return historically.

            Over 30 years with $100 invested at 7% return you would have $760, at 6.25% you would have $620.

            ETA: Come to think of it, that is really not a barbell strategy. That is more like a portfolio insurance strategy. A barbell strategy would be worse on the put side because you would be buying puts with negative expected value and investing the rest in short-term treasuries. But I suppose a true barbell strategy is long out-of-the-money strangles, so you would own the call side too.
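
            A hedged sketch of that back-of-envelope valuation: the “fair” value of the put is just its average payout across historical annual returns. The `annual_returns` list below is a made-up stand-in for the Shiller September-to-September series, which you would load yourself:

            ```python
            # Average payout of a one-year put struck 13% below spot, per
            # dollar of notional, over a series of annual total returns.
            annual_returns = [0.12, -0.28, 0.07, 0.21, -0.05, -0.41, 0.15]  # placeholder data

            def put_payoff(r, otm=0.13):
                """Payout of the put for a year with total return r."""
                return max(0.0, -r - otm)

            fair_value = sum(put_payoff(r) for r in annual_returns) / len(annual_returns)
            print(fair_value)  # ~1.26% on the real 1871+ series, vs. a ~2% market price
            ```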

          • baconbits9 says:

            Counterparty risk is typically absorbed by the exchange you are trading on and backed by their assets. An event that bankrupts the exchange is possible, and in such a case you would be paid out only a portion of your gains, as the exchange’s assets would be liquidated and split among its creditors.

          • glorkvorn says:

            So if I understand correctly, it seems like baconbits9 and eccdogg (thanks for running the numbers) are suggesting that the puts wouldn’t make money on their own, but you could (at least historically) make money by using them to limit your risk so that you can take on more leverage.

            That makes sense except… isn’t it completely against the spirit of the book? It seems like a perfect case of the ludic fallacy, where we’ve got this neat little probability calculation that seems to work, and it’s been tested against the past, and on paper according to what we know it should work, but if anything really weird happens or we miscalculated somewhere then it could blow up horribly on us.

            Which I guess is my main criticism of Taleb… all of his ideas sound nice when he writes about them, but it seems almost impossible to actually put them into practice.

          • baconbits9 says:

            @ glorkvorn

            The purpose of the numbers I have supplied, and the calculations with them, is not to predict that one strategy will average x% per year while the other averages more than x% per year. The attempt is to highlight the concept and potential impact of correlated returns, as well as their potentially low (but not precisely known) costs.

            I do not view it as my job (nor am I licensed to give such advice) to advise people on their particular risk tolerance, but I do think it does some good to make people roughly aware of the expected relationships.

            I personally think the market is currently beatable, and will be tracking my reasoning and attempts on my blog, but I am trying to divorce that belief from the general black swan concept (as I understand it) that is under discussion here.

          • Robert Jones says:

            @eccdogg, thanks for looking at the numbers. You’re right that you’re describing an insurance strategy, but that is also something Taleb mentioned in the quote (“Or, equivalently, you can have a speculative portfolio and insure it (if possible) against losses of more than, say, 15 percent”). Your 13% OTM put is pretty close to that.

            I think buying 85-90% treasuries and the rest in OTM puts would be mixing the two ideas in a way that doesn’t really make sense. That would likely have a negative real return, and the things aren’t anticorrelated. I’m not sure why Taleb thinks those two strategies are equivalent.

            @glorkvorn, I think you have it. Making anticorrelated investments so as to achieve positive return in all markets is just what LTCM was trying to do and that was an archetypal case of putting too much reliance on your model. One might argue that LTCM just didn’t do it right, but it seems contrary to the thrust of Taleb’s argument.

            @baconbits, I think there’s some tension here between two different aims. In one place you refer to decreasing exposure to catastrophic events, and a lot of the time I have the sense that you and Taleb are talking about that: I should be loss averse because I suffer more from losing everything than I gain from doubling up. We’re not talking here about some ordinary stock market fall, which I could ride out, but something like a global financial meltdown.

            Because there’s never been a global financial meltdown, it’s right to say that @eccdogg’s calculation is undervaluing the options. For example (to commit the ludic fallacy), if there’s a 0.01% risk per year of a 99% fall in stock prices, that adds 0.86% to the value of the option, which, hurrah, makes it positive EV. These fancy schmancy models are underpricing the tiny risk of a catastrophe, just as Taleb says.

            Except that, in the circumstance of a global financial meltdown, “counterparty risk is absorbed by the exchange” doesn’t seem very reassuring. In exactly the situation when you most need the insurance, you can’t rely on it. It’s the same reason that actual insurance won’t cover you for acts of war: insurance can’t work if everyone claims simultaneously.

            If we park the catastrophe risk, I’m left with reducing my exposure to ordinary stock market variations, but we’re back in Mediocristan, and I’m not too worried.

          • baconbits9 says:

            @ Robert Jones

            Yes there is a tension, but I think at least part of that is because there is a tension within every investment strategy as you are balancing competing aims and weighing them in uncertain conditions.

            I will disagree that there has never been a global financial meltdown; if the Great Depression doesn’t count, then I think your bar is too high. The raw statistics alone are terrifying, some of the details make it worse, and people raised in that era were permanently affected (not long ago I was told about someone’s grandmother who into her 80s would wring out and set to dry used paper towels to be reused, to the compulsive extent of taking them out of her daughter’s trashcan when she visited).

            I think buying 85-90% treasuries and the rest in OTM puts would be mixing the two ideas in a way that doesn’t really make sense. That would likely have a negative real return, and the things aren’t anticorrelated. I’m not sure why Taleb thinks those two strategies are equivalent.

            I agree that this is not a good strategy, though it would have been a great strategy with long-term bonds in the 80s and 90s. IIRC, a good number of bonds sold in the 80s ended up earning upwards of 5-6% over inflation for their lifespan. Current bond prices make it hard to see those types of gains going forward.

            My personal strategy is to keep my portfolio split between local real estate, the stock market and some gold, hedged using puts worth ~0.5% of our total portfolio, and I expect to increase that when stress signals increase, currently projected to be in about 18 months’ time.

          • eccdogg says:

            Because there’s never been a global financial meltdown, it’s right to say that @eccdogg’s calculation is undervaluing the options. For example (to commit the ludic fallacy), if there’s a 0.01% risk per year of a 99% fall in stock prices, that adds 0.86% to the value of the option, which, hurrah, makes it positive EV. These fancy schmancy models are underpricing the tiny risk of a catastrophe, just as Taleb says.

            I think you meant maybe 1% instead of 0.01%. But you are right: even looking at ~150 years of history, there still could be events outside of what we have experienced that are being priced into the options. See the discussion of the “Peso Problem” down thread.

            That is the problem I have with the ludic fallacy. While it is important and very good to keep in mind, it can devolve into a type of Pascal’s mugging. And it really can’t tell us whether the market overprices or underprices tail risk.

            I would also say if your strategy has to wait 100+ years to see the event that makes it break even or turn a profit it probably is not a very good one from a practical perspective. “In the long run we are all dead”.

    • j1000000 says:

      I still don’t know what Taleb wants me to invest in, though. Which I’ve never understood, since he’s so insistent on all of us having skin in the game and showing him our portfolio for him to judge, and presumably he wants to help morons like me. (I am the sort of moron who invested in triple short ETFs after reading The Black Swan. Lost some money, though obviously I understand even he himself at the time would’ve said it was a dumb strategy.)

      You say his concern is avoiding ruin, not getting rich — but that is not at all how he comes off in the book, nor I assume the principle on which he ran his fund.

    • Janet says:

      If you look at Taleb’s work across the whole Incerto series, he addresses this. One of his principal points is, personal debt is massively more risky than almost anyone admits, precisely because it can blow up on you for reasons entirely outside of your control, and it eliminates many options for you when things do get turbulent. Taking your hypothetical Joe Average in Las Vegas– if he had a paid-off house, it’s mathematically impossible for him to be at negative equity. If he were renting, he would not face any financial risk past the end of his lease, and even better, he’s free to move to a better location with better opportunities for a new job. Breaking a lease, or even being evicted on a lease, doesn’t even end up on your credit rating in most cases! Taking on debt for a house was taking on large, ill-understood risks for Joe Average. (And, as noted up-thread: houses are consumption not investments.)

      Second, Taleb also points out that there are jobs which are much more “Mediocristan” than “Extremistan”, and you need to factor that in to your plan. Think, cab driver vs. professional musician. Cabbies don’t make huge bucks, but it’s reasonably steady work, and you can easily move to a different city (or even country) and do the same job. Whereas, in your Joe Average scenario, a very great deal of his personal life was tied up in one job, with one employer, in one city– his salary, his healthcare, and his ability to carry the debts on his house. Joe was taking on much more risk, wholly out of his control, than he adverted to (or was advised to– although, I must say, every advisor I’ve ever heard insisted on not merely $10,000 in the bank, but a full 3-6 months’ salary at least…)

      So part of your personal “barbell” strategy is to have more than one option for income streams, with the “minimum” income needed to keep you afloat coming from “Mediocristan”. You can do it by having 2 (or more) substantive skillsets yourself– the wanna-be musician who also drives a cab to put food on the table– or you can tag-team with your spouse (one of you has the boring pay-the-bills job, the other has the shoot-the-moon job). Even better, do both, and do it before you really need it, and think really hard about finding two options that are as uncorrelated as possible (the demand for cabbies isn’t very correlated to demand for musicians; but two jobs in the same general field are likely very correlated and won’t make a good choice for risk mitigation).

      Third, Taleb points out that some situations have limited downside, whereas others have an unlimited downside– you need to actively seek to remove those situations with unlimited downside, and mentally plan out how you would respond to the finite losses that remain. Again, if Joe Average owned his house outright, the absolute worst he could do is reach a value of $0– even if the Mongols invaded. If Joe was renting, he’d have very little downside at all. Don’t chloroform yourself about “typical” or “historical data” or “value at risk calculations” or whatever, since you really can’t predict the unpredictable, nor can you avoid it. Know what depends on what, know what the total worst possible result is, and have as much spare capacity as possible to be able to deploy it when/where needed. And all financial advisors should be telling their clients this in the clearest possible terms.

      • j r says:

        Janet,

        This is a great comment. It’s crazy how when people think about personal finance, they so often think about their retirement accounts and what other possible new investment classes they should get into as opposed to thinking about their sources of income and their consumption habits (i.e. where to live, whether to rent or own, etc.).

        The median American probably spends ~30% of their income on housing while saving 3%. Even if you change that to the median investor, the savings rate maybe jumps to 10-15%. If you want to maximize your personal wealth, focus on the big numbers: your income, your housing costs, and your lifestyle.

        And instead of buying exotic options or trying to pick the penny stock that’s going to go through the roof or buying into a Chilean gold mine or whatever, try finding hobbies or side hustles that could turn sweat equity into a productive asset. Or find a cheap house in an underappreciated area that you can buy for cash or pay off in 10 years instead of a 30-year mortgage that’s costing 40% of your take-home pay in the “desirable” neighborhood.

        We make ourselves fragile or anti-fragile not so much by the position of our retirement portfolio, but by the myriad of lifestyle decisions that we make every day.

    • Steve Sailer says:

      Yes, I never understood the “Black Swan” metaphor for scary risks that are somewhat more common than experts assume. I’m not scared of swans, and black ones are common in the Antipodes and not all that rare these days in ornamental settings elsewhere.

      The “Black Death” might be a better analogy for what Taleb is talking about.

      • Hanfeizi says:

        The idea isn’t that they’re scary; the idea is that they’re an unknown phenomenon that none of our previous models accounted for before they were discovered. “All swans are white” was conventional wisdom, the platonic model of swanness… then one day, “whoops!”, there’s a black one that doesn’t fit the model of swanness that existed up until that point.

  29. Nancy Lebovitz says:

    A very casual search didn’t turn up anything about research on barbell strategies. I’m thinking about tracking past results of hypothetical barbell investors, and it seems very hard to guess what high-risk investments they’d choose.

  30. Doug S. says:

    One Black Swan that a lot of people don’t think is worth hedging against is Acts Of Government. Communist revolutions and other nationalizations of industries are an obvious example, as is stuff getting destroyed in wars, but FDR issued an executive order forbidding the “hoarding” of gold, tobacco companies have a downside risk of legal liability, Standard Oil and Ma Bell got broken up (ironically, the market capitalization of the parts ended up bigger than that of the original company), and Long-Term Capital Management got fucked over by the Russian government when it first defaulted on its debt then made it illegal for Russian banks to pay out on the insurance LTCM bought to hedge against Russian government default.

    • guardianpsych says:

      Taleb is at pains to go through all the major events that the market has completely missed predicting.

      Both world wars are a noticeable example that stood out to me.

  31. arlie says:

    This was fun to read.

    I read the book long ago, and what struck me the most about this review was the ways in which Scott noticed things I no longer remember, such as the railing against some group that Taleb chose to label as “nerds”, which frankly doesn’t match those I usually see tagged with that label. But then, lots of people who want to be supported because they act “cool” and/or “high status” throw in an obligatory dig at lower-status, uncool people as part of performing “coolness”.

    What I remember most about this book was that it gave us all a clear explanation and associated soundbite for something some people may have already understood, and others were groping towards. I suspect the soundbite might be its main source of fame 😉 Not that it’s a bad book – I remember it positively, if vaguely – but just that a lot of books are read, perhaps reviewed quite positively, and then mostly forgotten.

    One thing I learned in (humanities) grad school was the uses of books everyone cites, and convenient sound bites. It’s not something I’d have thought about in STEM. But they make it so much easier to keep an essay short ;-(

    However good or bad the book, Taleb has managed that contribution. Thank you, Scott, for a review that reminded me of more than just its commonly cited soundbite(s).

  32. fluorocarbon says:

    He also mentions – and somehow I didn’t know this already – that modern empiricism descends from Sextus Empiricus, a classical doctor who popularized skeptical and empirical ideas as the proper way to do medicine.

    For what it’s worth, the English word “empiricism” ultimately comes from the Ancient Greek word ἐμπειρία (empeiria) which means “experience.” Sextus Empiricus’s name in Greek is Σέξτος Ἐμπειρικός (Sextos Empeirikos) which literally means “Sextus the experienced.” In this case, it probably means something along the lines of “Sextus who belonged to the Empiric school of physicians” or “Sextus who belonged to a school which is characterized by experience.” The school wasn’t named for Sextus. It predates him, and he was named after the school.

  33. Peter Shenkin says:

    Just another example of an embittered (though not violent) attack on a critic.

    In the 1980s I saw the great jazz vocalist Billy Eckstine live at a jazz club. He spent quite a bit of time talking about the failure of his big band, which he organized in 1944, and which included such luminaries as Dizzy Gillespie, Miles Davis, and Sarah Vaughan. Most of the reminiscence, however, was spent in a tirade against the famous jazz critic Leonard Feather, who just didn’t happen to like the band, and said so in print back then. Eckstine explicitly blamed “Leonard Featherhead” for the demise of his band.

    So Eckstine spent a considerable number of minutes on stage in an embittered rant over a critical review that had appeared over 40 years previously! He could instead have added another tune to his performance, for the benefit of the people who liked him as an artist (or they wouldn’t have been there) and had paid to hear him sing. I thought this was rather ungracious of him.

  34. Alex Zavoluk says:

    I don’t know to what degree the project of “becoming well-calibrated with probabilities” is a solution to the ludic fallacy, or a case of stupidly falling victim to the ludic fallacy.

    I think in practice the problem (or at least one major problem) is that determining if you can reliably distinguish between probabilities of .1, .01, .001, and .0001 (or even smaller) requires far more data and experiments than one could reasonably gather. Another problem might be that for events like “massive economic crisis” you should never assign probability lower than, say, .05 per year, regardless of how well-calibrated you seem to be, which isn’t fine-grained enough for any sort of complex strategy.
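
    As a hedged back-of-envelope for the first problem (a standard-error heuristic, not anything from the book): to tell an event of probability p apart from one of probability p/10, you roughly need the standard error of your estimate, sqrt(p/n), to shrink to about half the gap between them.

    ```python
    import math

    # Trials needed so that sqrt(p/n) equals half the gap between p and p/10.
    for p in [0.1, 0.01, 0.001, 0.0001]:
        gap = p - p / 10        # distance to the next decade down
        n = p / (gap / 2) ** 2  # solve sqrt(p/n) = gap/2 for n
        print(f"p={p}: ~{math.ceil(n):,} trials")

    # ~50 trials for p=0.1, up to ~50,000 for p=0.0001 -- and that buys only
    # a one-standard-error separation; a conventional significance test
    # multiplies these figures several-fold.
    ```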

    I wouldn’t worry about Taleb not being good at theory, or undervaluing theory. He is perfectly willing to write theory papers.

    Interesting point about the difference between Taleb and the rest of the “rationality” writers.

  35. JASSCC says:

    It has been a while since I read the book, but I thought the point about empiricism was something along the lines of suggesting that it’s more fundamental and crucial than a fondness for theories, though theory building is easier and more seductive.

    I don’t recall if Taleb said this, or if it came from my own ponderings, but this is where I ended up on the subject, after reading the book:

    Consider an empiricist and a theoretician of the Renaissance, both interested in an understanding of, say, the variety and causes of leaf shapes. Both start with only ordinary experience of leaf shapes. The theorist knows a few and thinks hard about plants and sunlight and air and temperature variations and soil types, and thereupon concocts an elaborate theory of the possibilities and causes of varied leaf shapes. The empiricist, meanwhile, goes out and looks at an enormous number of leaves and writes a big book of leaf observations, together with some hypotheses, not yet advancing any unifying theory.

    The empiricist is now in a much better position to use the observations to spot errors in the theory than the theorist is to be able to use the theory to spot errors in the observations and hypotheses, unless by a nearly impossible stroke of enormous luck the theorist has jumped directly to a highly accurate theory.

    Eventually a person very conversant with the body of observations might be able to advance a very good theory of leaf shapes. But the empiricism comes first.

    • ReaperReader says:

      Doesn’t the theory come first? You state that both start off with ordinary experience of leaf shapes. This requires some theory, probably ill-defined, about what is a leaf shape. Does it include fern fronds? Flax leaves? Flower petals? Moss? Lichen?

      If the empiricist doesn’t have any starting theory about what’s a leaf and what isn’t, they’re never going to be able to finish their book.

      • JASSCC says:

        I would suggest that at the point you have some loose definitions, you don’t yet have a theory, more like working definitions and some preliminary hypotheses. I can agree to the notion that you can’t proceed even with empirical work without some kind of tentative framework of ideas or notions. But I think we can also distinguish that from a theoretical approach where we leap right to a big theory.

        Another distinction might be that the empiricist, by collecting, collating, sorting, sifting and generally working with the underlying evidence, can see what goes into the data set in a much richer way than a theorist who works from tables of numbers or even richer data like pictures or drawings (in the case of our leaf theorist).

  36. achenx says:

    Tangential, but speaking of John Keats’s gravestone, the Protestant Cemetery in Rome is lovely. Don’t go if you’re ailurophobic.

  37. herculesorion says:

    I liked “The Black Swan” better when it was posited by Michael Crichton using a fictional chaos-mathematician character as a mouthpiece.

    Which seems to me to be the point of the whole thing–that your models will always be incomplete, your plans will always fail to include the tiniest chance of catastrophe, and the point is not what you do when your model is right but how you react when it goes wrong.

    • Chlopodo says:

      I might be simultaneously misunderstanding you, Crichton, Taleb, and chaos theory here… but I thought the interesting point about chaos theory was that even within a simple, modeled system where the variables are few and all accounted for, the system could still destabilize in unpredicted ways? I.e. that the reason bifurcation maps go all haywire isn’t because of some small, disruptive variable not being accounted for, but that the model itself creates the disruption.

      That seems to be a different phenomenon than what you’re alluding to.

  38. rahien.din says:

    Relying on this review, and having read Taleb’s The Bed of Procrustes, I interpret Taleb’s explicit and implicit message as “Don’t fuck with instrumental beliefs.”

    On one hand, Chesterton beat him to it.

    On the other hand, instrumental beliefs are just as vulnerable to Knightian uncertainty as theory is. Maybe that’s because they have to be generated by similar mechanisms – “The fence did not assemble itself” is valid, but so is “The former fences did not disassemble themselves.”

  39. empiricists are likely to fall for picking-pennies-from-in-front-of-steamroller bets, whereas (sufficiently smart) theorists will reject them.

    For some evidence of the pre-book world, I remember my father, almost certainly well before The Black Swan was published, discussing this issue in the context of currency speculation. For many years, you could make a little money every year by buying peso futures, because the dollar price of pesos next year was lower than the price of pesos turned out to be when next year arrived.

    The explanation was that, every year, there was a small chance of a large devaluation, and when it happened you lost a lot of money.

    • Eponymous says:

      And now there are a number of papers using “rare disasters” to explain various financial “puzzles”.

    • eccdogg says:

      Here is the Wikipedia page on the “Peso Problem”, which goes back to the 70’s and is credited to your father.

      https://en.wikipedia.org/wiki/Peso_problem_(finance)

    • JASSCC says:

      Taleb’s position on this seems to be that “theorists” are more likely to fool themselves with their theories by extrapolating from too little data or improper statistical inference, thinking that they know when they are in the position of picking pennies up in front of a steamroller. They therefore create a systematic bias away from believing in the possibility of crashes (and perhaps booms).

      A smart empiricist meanwhile will see that financial disasters that should only happen once in a century happen far more frequently than that, and will realize that *nobody knows the true probability distribution of certain kinds of financial disasters*. In other words, the empiricist soon learns that no one can reliably see the steamroller coming. So the wise empiricist tries the opposite strategy to picking pennies up in front of the steamroller: buying a diverse portfolio of cheap things like out-of-the-money options on long shots. These will steadily bleed money year after year until you reap a windfall when an underestimated contingency occurs.

      • eccdogg says:

        But how do you know when out of the money options on long shots are cheap? Perhaps both your modal and mean return are negative.

        In my experience folks love to buy tailish options with skewed payoff and folks hate to sell them. So they are likely money losers on average.

        • JASSCC says:

          To be clear, I have no idea how to do these valuations, and I do not personally do this. I am describing what I took to be Taleb’s strategy, from what he has written, but I take your point. I notice he has also stated it takes years of experience to evaluate options properly.

    • Lambert says:

      Well, did the Peso ever unexpectedly and precipitously fall?

      I suppose third world currencies are at greater risk of sudden devaluation (regime change, natural disaster etc.) for various reasons than first world ones.

      • Janet says:

        Yes… for some values of “unexpectedly”.

        The two worst cases were 1982-83, and again in 1994-95, each being over 50% devaluations relative to the US dollar with concomitant severe recessions, hyperinflation, and political violence. Also called the “Tequila Effect”.

        PS, the peso has also lost about 50% of its value relative to the dollar since 2008.

        • Lambert says:

          Looking at the WP article, the relevant devaluation was the one that happened when the peso unpegged from the dollar in ’76.

  40. dark orchid says:

    The power law is something that’s taught in a lot of good courses on risk management and cybersecurity – I first encountered it under the name “threat curve” in some article that had something to do with NATO reassessing its priorities after 9/11. The basic idea is that if you make a plot with y = expected damage if something occurs and x = probability that it occurs and then you plot the risks you’re supposed to be dealing with, your points end up lying around a line something like y = 1/x. In the original example “nuclear war” was on one end of the curve and “terrorist with knife” on the other end, with conventional wars and larger terrorist attacks in between.

    From a risk management perspective, since risk = expected damage in case of accident * probability that accident occurs, the curve xy = C is an iso-risk curve; that is, all points on such a curve are equally risky. Once you’ve mitigated as best you can everything with a risk factor above some constant C (acceptable risk), you’d expect the residual risks to follow roughly a power-law distribution. If you have an outlier with a much higher C, you pour resources into mitigating that first (at least you do if you’re managing risk rationally), and if you take this strategy you eventually end up with the power law for the remaining ones.

    Another point of teaching this plot is usually to remind people that all points on the iso-risk curve deserve our attention, not just (depending on our preferences) ones with high x- or y-values.
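
    A minimal sketch of the iso-risk idea (the threat names and numbers are invented for illustration, not taken from the NATO article):

    ```python
    # risk = probability * damage, so every point on the curve x*y = C
    # carries the same expected loss per year.
    threats = {
        "terrorist with knife":   (1e-1, 1e1),  # (probability/yr, damage units)
        "large terrorist attack": (1e-3, 1e3),
        "conventional war":       (1e-4, 1e4),
        "nuclear war":            (1e-6, 1e6),
    }

    for name, (p, damage) in threats.items():
        print(f"{name:>22}: risk = {p * damage:g}")

    # All four print risk = 1: they sit on the same iso-risk curve xy = 1,
    # which is why each deserves attention, not just the high-damage or
    # high-probability end.
    ```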

  41. adamshrugged says:

    I’m surprised you’re into him, Scott. A lot of his writing seems to me exactly what you argue against in your post on overconfidence (and other places): that retreating to “Knightian uncertainty” is just a lazy way of avoiding doing the hard work of reasoning over the crazy amounts of normal, Bayesian uncertainty in tough real-world situations. I mean, do you think he’s right that not all uncertainty can be captured in probabilities? Which Bayesian axiom do you want to abandon?

  42. kominek says:

    (i have not read taleb. i don’t feel like i’m missing out.)

    why doesn’t this investing-in-things-hoping-for-good-black-swans strategy just devolve into buying a total market fund?

    i don’t know which pharma company will cure cancer, or when. am i just supposed to buy a single one chosen with a dart board? if i’m supposed to buy more than one, am i just hitting that dart board multiple times? if we’re hoping for black swans, i don’t see how any analysis can help me. shouldn’t i just buy them all?

    but why would i expect the best black swans to be over in pharma? maybe GOOG will figure out how to upload minds, and make infinity dollars off of that. or a mining company will discover a pocket of unobtanium, and make 2x infinity dollars selling it to GOOG by the microgram for use in the upload process.

    • Matt M says:

      I think this is a legit point – and that Pharma is actually a really bad example of a “black swan” because, as he suggests, everyone knows that the entire business model of pharma is to hope for the home-run billion-dollar drugs.

      As such, some pharma company finding a great cancer treatment and making billions isn’t a black swan at all. It’s not something that nobody expected to happen. It’s something that virtually everyone expects to happen. They don’t know which company will do it or what the treatment will be or when it will be discovered, but overall, it’s not a crazy and random and unexpected event. It’s an entirely expected event.

  43. amaranth says:

    > If you have a room full of 99 average-income people plus Jeff Bezos, Bezos has 99.99% of the total wealth.

    conflating income with wealth is extremely hazardous to your epistemics

  44. Bugmaster says:

    I am not sure this is true – my last New York taxi driver spent the ride explaining to me that he was the Messiah, which seems like an error on some important axis of reasoning that most intellectuals get right.

    That is only true if your cab driver was not, in fact, the Messiah. You might argue that the probability of him actually being the Messiah is infinitesimally small, but… well… I’d refer you to the subject of this article 🙂

  45. flynnkdflynnkd says:

    I really enjoyed your review, and learned a bit, thank you. 🙂

  46. deciusbrutus says:

    On the discussion of investment risk:
    The high-risk investments aren’t the ones that have a small maximum downside. The high-risk investments are the ones where you are exposed to unbounded downside risk, and also unbounded upside risk.

    Most of those involve becoming a principal agent in some kind of endeavor.

  47. Jiro says:

    If you have a room full of 99 average-height people plus Yao Ming, Yao only has 1.3% of the total height in the room. If you have a room full of 99 average-income people plus Jeff Bezos, Bezos has 99.99% of the total wealth.

    Suppose that instead of directly measuring height, you measured the probability that a basketball team made up of clones of that person could win against an average team. It might then turn out that the basketball-success-height-values are very unevenly distributed and that Yao Ming has 99% of them, even if he doesn’t have 99% of the height as measured in inches.

    I could also frame a way to measure Jeff Bezos’ wealth that works the same way in reverse to show that everyone has barely varying values of a wealth-measure even if they have widely varying values of another one (dollars).
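
    A hedged illustration of the point (the logistic map from height to clone-team win probability is entirely invented, just to show that a monotone rescaling can move a quantity from Mediocristan to Extremistan):

    ```python
    from math import exp

    heights = [67] * 99 + [90]  # 99 average people plus a 7'6" Yao, in inches

    def win_prob(h, midpoint=85, steepness=0.9):
        # Hypothetical: teams of clones much shorter than ~7' almost never win.
        return 1 / (1 + exp(-steepness * (h - midpoint)))

    total = sum(win_prob(h) for h in heights)
    print(win_prob(90) / total)  # Yao's share of the transformed measure: ~99.999%
    print(90 / sum(heights))     # vs. his ~1.3% share of raw height
    ```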

  48. Douglas Knight says:

    Taleb wasn’t trading his own money in 1987. In 2000 and 2008 he may have been partly trading his own money, but mainly other people’s. I think he claims that 1987 was an accident and he’s spent the rest of his career trying to recreate it. (This guy makes very specific claims about what he was doing in 1987, making it sound more intentional. Here is Taleb making it sound intentional. Maybe where he said elsewhere that it was accidental, he meant not systematic, but a sideline to his market making job.)

    Empirica (1999–2004) allegedly made 60% on the dot com crash of 2000, but lost money in 2001 and 2002. I think it was net negative over its lifespan. I’d think that 9/11 would be more of a “black swan” than the dot com crash, yet Taleb lost money on it. This seems very strange to me. Maybe it was a bad implementation of the strategy. (Some sources say that this was mainly his own money.)

    The publication of this book in 2007 allowed Taleb, or rather his protege Spitznagel, to try the strategy again, in the form of Universa (2007–). That was pretty good timing, to start just before a market crash, although maybe that was intentional. Anyhow, the alleged 100% gains of the first year carried this fund much farther than the first fund, so it’s probably a better implementation. (It also allegedly made money in 2011 and 2015.)

  49. nameless1 says:

    NNT is a genius, but tends to focus too much on showing off his brilliance by touching a gazillion topics instead of digging deep in a few. I have heard this is stereotypical of Francophone intellectuals and this is why people tend to either love or hate them, based on whether they like shows of brilliance or actually want to learn something. Is this true? I stolidly refuse to learn languages where “houx” is pronounced as “u” because it feels like something an epic troll came up with, so I cannot really verify it.

  50. j r says:

    Taleb is such an interesting character even aside from the Twitter tirades. The thing that fascinates me is that he is known for a specific set of insights about systemic risk and the misapplication of academic knowledge (which I think are pretty insightful), but his whole persona is based around applying those insights in a bunch of random ways that don’t have much in common except that he uses them to support the things that he likes and disparage the things that he doesn’t like.

    So, for instance Taleb is a practicing Orthodox Christian and speaks fondly of those versions of Christianity that practice fasting, because suffering is a form of skin in the game. OK, sounds nice, but what does that actually mean? Does the fact that some people suffer for their religion make their beliefs more “true” or more helpful?

    It’s also not all that clear how his political opinions flow from his beliefs about anti-fragility and skin in the game. It’s one thing to call Richard Thaler an “intellectual-yet-idiot” and Noah Smith a BS-vendor, but I’ve seen him approvingly re-Tweet Mike Cernovich, one of the biggest voices behind the very real BS of pizzagate.

    I’ve seen Taleb say that he knew Trump would win the Republican nomination because he had more skin in the game than the other candidates and that voters connected with Trump because he was an entrepreneur who had tried and failed and succeeded at a bunch of different endeavors. That makes some sense on the surface, but it doesn’t hold up to much scrutiny. What was Trump’s skin in the game? He put some of his own money into his campaign, but his campaign also bought a whole bunch of goods and services from his various businesses. It’s probably more true that Trump did well, because he didn’t really have much to lose. The publicity he got from running was probably enough to make his investment of a few million dollars worth it. And not being a politician meant that he could violate political norms at will, which is probably what gave him his biggest advantage. That would seem to be the opposite of skin in the game. He was playing with house money.

    And maybe Trump has been personally anti-fragile in that he has failed a bunch of times, but has managed to remain personally wealthy and move on to the next thing. But the actual businesses that he ran look like they were designed to fail, and fail in such a way that he could extract the maximum value for himself and leave his partners and contractors and lenders with most of the losses. That’s great for him, but it doesn’t exactly signal that he’s going to be great for the country.

    • Lambert says:

      > suffering is a form of skin in the game

      Is that really the case? I’m fairly sure that skin in the game is about suffering iff you’re wrong. Fasting is no more nor less pleasant depending on whether or not God exists.

    • mikks says:

      I remember Taleb’s point about Trump a bit differently. I think he defended Trump’s bankruptcies, saying that these show that Trump has lived a life with skin in the game, and in Taleb’s view that is a good thing.

      • j r says:

        Yes. And my point is that is a really stupid use of the concept of skin in the game.

        ETA: By comparison, skin in the game is a very useful concept when talking about systemic risk. A financial system in which banks keep most of the mortgages they underwrite on their own books and investment banks keep a significant portion of derivatives that they create on their own balance sheet is probably going to be more stable than a system in which banks are originating mortgages and securitizing every flow they can get their hands on and immediately selling them on to someone else.

        When Taleb takes that concept and tries to use it to say that Trump would make a good president because he’s gone bankrupt a bunch of times, it’s the exact kind of BS-vending that he accuses others of.

        • Matt M says:

          A financial system in which banks keep most of the mortgages they underwrite on their own books and investment banks keep a significant portion of derivatives that they create on their own balance sheet is probably going to be more stable than a system in which banks are originating mortgages and securitizing every flow they can get their hands on and immediately selling them on to someone else.

          I don’t see why this is necessarily true.

          Consider the division of labor. Underwriting a mortgage isn’t the same thing as investing in mortgages. Why should we expect the best underwriters to also be the best investors?

          I could make the opposite argument. That the people who specialize in investing (and don’t do any underwriting) should be better qualified to judge the risks of mortgage investment portfolios than those who attempt to do both. That those who invest in such securities from a wide range of potential underwriters should, in theory, be the most familiar with how to recognize and spot potential fraud.

  51. JASSCC says:

    It has been several months since I read the book, but I think I recall that Taleb tried to specify which things are likely to fall in the different realms, Extremistan vs. Mediocristan.

    I paid some attention to this because my bias, having studied statistics, was to expect mediocristan to be something of the “normal” case, pun intended, precisely because of the central limit theorem. Of course the CLT has requirements of independence, etc., but many situations end up looking normal-ish.

    In the book, I believe Taleb points out that phenomena that depend on human interaction are particularly prone to differing wildly from the kind of natural phenomena that land in Mediocristan, precisely because humans can influence each other and spread ideas like contagion. So the more you’re in the realm of human-decided events, the more you’ll be in Extremistan. And, further, you can see the history of science and technology as a process that nearly monotonically shifts our concerns from the (non-human) natural world to the world of human influence and decision-making. Therefore, our own tools and know-how land us more and more in Extremistan for purposes of dealing with phenomena that matter to us.

  52. diegode says:

    Thanks Scott, great review. Have you read “The Book of Why” by Judea Pearl? I think you’ll enjoy it.

  53. IsmiratSeven says:

    Violent assault is no longer such a remote possibility; maybe my considerations should even be dominated by it.

    Violent assault occupies roughly the same space along the Y axis on both graphs. Is there some self-effacing irony I’m missing out on here?

  54. baconbits9 says:

    After reading through the comments I thought a couple more points should be added.

    A Black Swan isn’t just tail risk, or an unlikely event. It is an out-of-sample event. Say you watched waves crash on the beach for a few hundred (or thousand) hours, cataloging them in all kinds of weather and all kinds of conditions, and measured the size and strength of the waves. Then you take your empirical measurements, build a model of what forces would cause this wave pattern, use that model to predict the largest and strongest waves that will hit this area, and then build some structure to withstand them.

    Say you modeled waves as a normal distribution after measuring them and seeing that all of your observations fit that distribution well, calculated that a 100 ft wave hitting the beach was a one-in-10,000-year event based on your observations of 5, 10, 15 and 20 ft waves, and based your engineering partially on that assumption. Now say that waves sometimes have an out-of-sample outcome. An out-of-sample outcome is not a tail-end risk; it is an event whose frequency cannot be predicted by looking at the sample at hand (which is not the same as not being able to predict it!). Instead of occurring every 10,000 years, those waves occur every 100 years, without an additional increase in the presence of 75, 50 or 25 ft waves. It isn’t a “fat tail”; it’s a singular spike that doesn’t fit the rest of the distribution. The importance of the spike is that its existence cannot be inferred from any sample that doesn’t include it.

    What happened is that you built a model without knowing how much data you needed to build such a model; if you had observed one of those 100-year waves you would have built a different model of the forces that generate waves and gotten a different outcome.

    Now add the power-law distribution, which is not a distribution of the probability of events but a distribution of the impact of those events. If you build a protective structure, develop along the beach behind it based on your models, and state that it will protect the new community against the worst expected event in 1,000 years, then the impact of that 1-in-100-year event that you thought was a 1-in-10,000-year event will dwarf the impact of the other 99 years.
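
    A hedged simulation of the wave thought experiment (every number is invented to match the story, not real oceanography): fit a normal distribution to an observation window that happened to contain none of the rare waves, then ask the fitted model about a 100 ft wave.

    ```python
    import random
    import statistics
    from math import erf, sqrt

    random.seed(0)

    # Everyday surf: the only thing our (unlucky) sample window contains.
    sample = [random.gauss(10, 4) for _ in range(500)]
    mu, sigma = statistics.mean(sample), statistics.stdev(sample)

    # Model-implied probability of a wave >= 100 ft under the fitted normal:
    z = (100 - mu) / sigma
    p_model = 0.5 * (1 - erf(z / sqrt(2)))

    print(f"fitted normal: P(wave >= 100 ft) ~ {p_model:.1e}")  # effectively zero
    print("true process:  0.01 per year, by construction")
    # The model isn't wrong about the sample; it is wrong about the spike,
    # whose existence the sample gave it no way to see.
    ```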

    • Andrew Klaassen says:

      A Black Swan isn’t just tail risk, or an unlikely event. It is an out of sample event.

      This is an important point, and it leads to a follow-up point: Out-of-sample events can become sampled events. Some Black Swans can be made tractable and well-modeled.

      This has happened with plagues, earthquakes, and (your example) waves. It has happened with heart attacks, septic shock, and a whole host of other medical conditions. I’ll talk more about septic shock below, since you can make interesting analogies out of it.

      Scott is poking at the edges of this when he tries to figure out how medical theory (e.g. circulation of the blood, germ theory) fits into all this. I’d suggest that the transition from Black Swan to predictable event often goes something like this:

      1. A lot more data gets collected about something. Maybe this is because Greek and Phoenician traders encounter lots of different ideas and societies as they travel around the Mediterranean. Maybe this is because the printing press makes it a lot cheaper to share data and build libraries. Maybe it’s because steamships make traveling around the globe a normal thing. Maybe it’s because people start cutting open bodies and carefully recording what’s inside. Maybe it’s because medical schools begin keeping detailed records, and sharing them with other medical schools. Maybe somebody invents a microscope.

      2. People start noticing patterns in the data, and correlations between data. They come up with theories, as we always do. One person notes that when people sin more, earthquakes are more likely to happen; God must be punishing people. Another person, with a lot more data, notes that earthquakes tend to happen in geographical clusters. Someone invents the seismograph to track earthquakes, and they discover that little tiny earthquakes that nobody notices also happen a lot, and they often happen in the same geographical clusters where big earthquakes happen. The theory of plate tectonics comes out of left field, and now we have a theory which ties together not just earthquakes but a bunch of other things, too.

      For events which we can only observe and model, that’s where it stops. Earthquakes are in a weird space between Black Swan and predictable: We know why they happen, we know where they happen, but we don’t know all the factors that lead to exactly where and exactly when.

      But if we can do experiments on something, we can go on to:

      3. We identify the probable causes for long-tail events, and we do lots of experiments where we purposely push parameters into the long-tail zone. We subject a whole bunch of mice to infection, and we vary the conditions in lots of different ways, and we discover exactly what turns a simple scratch into septic shock. Because we’re not limited to observation, we can collect data at the long tails and figure out the exact shape of those long tails. The body is a system which is subject to systemic failure, but because we can do experiments on many near-copies of a similar system, we can turn some Black Swans into well-understood events.

      I’ll have to make this a two-part comment, with the second part saved for after work, since I’m ridiculously late already.

    • Andrew Klaassen says:

      As promised: Septic shock. Don’t base your MCAT study sessions (or your investing decisions) on this.

      The innate immune system has sentinel cells spread throughout the body. If your toe gets a sliver with bacteria on it, the sentinel cells will release signaling molecules which a) tell immune system cells which are circulating in the bloodstream to stop and help, and b) tell the local blood vessel walls to get leaky. Leaky blood vessel walls allow the immune system cells which are stopping to more easily exit the blood vessels and go into the affected tissue. The toe gets red and angry-looking as blood leaks into it. The immune system cells get to work destroying the infection.

      This is a useful local response. Events like this are typically independent and have no systemic effects. An infected sliver in your toe makes your toe swell up; an infected scratch on your elbow makes your elbow swell up.

      But… if a large number of bacteria escape into the bloodstream at once, and spread everywhere, the local response can become a global response. All over the body, local blood vessels are signaled to get leaky. Patches appear all over your skin that are red and angry-looking. This leads to a drop in blood pressure. The drop in blood pressure can be serious enough that organs get starved of oxygen and the heart freaks out. 25-50% of the time, death is the result. A healthy local response becomes a systemic disaster.

      There is a (very loose) analogy here to some types of financial panic.

      If one bank suspects one company of fraud and refuses credit to it, it protects the overall economy. It is a healthy immune response. If all banks suspect a high risk of fraud everywhere and refuse credit to everyone, it crashes the economy. It’s septic shock.

      Manias, Panics and Crashes surveys a few centuries of systemic financial failures and proposes a model for some of them. A new idea; then excitement; then exuberance. Two things follow from exuberance: Credit expands as lenders get caught up in the exuberance; fraud expands as fraudsters take advantage of reduced scrutiny. At some point, everybody wakes up to how much fraud has spread through the system. Lenders realize that they don’t know exactly where the fraud is, just that it could be anywhere. They panic and stop lending to anyone. The financial system freezes up.

      The book was written in the 1970s, but it’s an okay description of the 2000 tech bubble (remember Enron?) and a great description of 2008. What was a Black Swan in the tulip market of the 1630s or the Panic of 1873 was no longer a Black Swan in 2008.

      In other words, what happened in 2008 was already understood, at least in outline. It was somewhere in step 2 of the progression in my previous comment: multiple events observed, understood, loosely modeled. As with earthquakes, we know that increasing fraud and increasingly loose credit put pressure on a financial system and will produce a disaster sooner or later. We are not dealing with an unknown unknown anymore.

      However, unlike septic shock, we can’t get to step 3 with this particular type of credit-and-fraud-driven financial panic. We can’t explore the long tails with experiments on multiple copies of near-identical financial systems. It’s not a Black Swan, and it’s not completely understood. It’s in between.

      Note that I’m not suggesting that we can eliminate all Black Swans. That would be stupid. There will always be more unknown unknowns. However, we can sometimes convert what were Black Swans into partially-understood or, sometimes, well-understood phenomena.

  55. McClain says:

    For those who were wondering and/or quibbling about Taleb’s use of black swans as an ornithological metaphor: it’s a reference to Karl Popper’s argument about the importance of falsifiability in science.

    • baconbits9 says:

      From the book itself: The metaphor of the black swan is not at all a modern one, contrary to its usual attribution to Popper, Mill, Hume, and others. I selected it because it corresponds to the ancient idea of a “rare bird”. The Latin poet Juvenal refers to a “bird as rare as the black swan.”

      • McClain says:

        Nice catch! Juvenal certainly deserves more credit than he generally gets. Serving up philosophical insights in dactylic hexameter is very nearly a lost art these days….

  56. hopaulius says:

    “But the nature of all mental processes as a necessary balance between theory and evidence is my personal hobby-horse…” This leaves out emotion, which according to Haidt and others is the beginning and root of mental processes: the elephant that actually drives the machinery, with reason merely riding along.

  57. googolplexbyte says:

    A less ludic version of your example:

    This problem also comes up in medicine. Imagine two different drugs. Both cure the same disease, and both do it equally well. Drug 1 doesn’t make it any further into the body than the gut lining, where it’s absorbed; drug 2 is absorbed and spreads all throughout the body. When patients on drug 1 have some rare adverse reaction, it’s always a gut-related issue. Patients on drug 2 don’t have gut issues; sure, there are patients with random unrelated symptoms, but no one symptom is statistically significantly associated with the drug. So let’s use drug 2. But this is plausibly the wrong move: a drug that causes known issues in a narrow range is safer than a drug that causes unknown issues across a broad range, and a good understanding of the theory would make doctors much more cautious.
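
    A toy significance calculation (mine, not googolplexbyte’s; the 1,000 patients per drug, 1% background rate per symptom category, and 30 excess drug-caused adverse events are all made-up numbers) shows how the same amount of harm can be obvious when concentrated and invisible when diffuse:

```python
# Same 30 excess adverse events in each arm: concentrated in one symptom
# category for drug 1, spread one-per-category over 30 categories for drug 2.
from scipy.stats import binomtest

n_patients = 1000
baseline_rate = 0.01                              # assumed background rate per category
baseline_count = int(n_patients * baseline_rate)  # 10 expected events per category

# Drug 1: all 30 excess events land in the single gut-related category.
gut_events = baseline_count + 30
p1 = binomtest(gut_events, n_patients, baseline_rate, alternative="greater").pvalue

# Drug 2: the same 30 excess events spread over 30 unrelated categories,
# one extra event each.
spread_events = baseline_count + 1
p2 = binomtest(spread_events, n_patients, baseline_rate, alternative="greater").pvalue

print(f"drug 1, gut category:  {gut_events} events, p = {p1:.1e}")   # vanishingly small
print(f"drug 2, each category: {spread_events} events, p = {p2:.2f}")  # nowhere near 0.05
```

    The total harm is identical in both arms; only its concentration differs, and a per-symptom significance test only ever sees the concentrated version.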

  58. pontifex says:

    As Scott pointed out, everyone operates on mental models. They may be simple or implicit ones, but they’re still there. So saying “your model might be wrong” is not insightful unless you can propose a better one.

    Nassim’s suggestions here seem worse than useless. Asking cab drivers for financial advice because of their folksy wisdom is a terrible idea.

    I remember during the Bitcoin mania of 2017, one commenter posted here saying that his Uber driver was trying to get him to buy Bitcoin. For him, that was the signal that it was time to get out of Bitcoin. He was right.

    • Matt M says:

      I had an Uber driver ask me if I could help explain to him how to buy Bitcoin.

      I (falsely) told him I didn’t know anything about it. I’d like to think I saved the guy some money, but he probably eventually found a rider willing to help him…

      • nobody.really says:

        I had an Uber driver ask me if I could help explain to him how to buy Bitcoin.

        I (falsely) told him I didn’t know anything about it. I’d like to think I saved the guy some money….

        How paternalistic–and kind.

        I think a story on This American Life involved a guy making some spare cash as a telephone psychic. He would tell his callers that they were born under an unlucky star, so they should not bother with the lottery or risky investments, and should always wear seat belts. Heck, if he’s making up a story, why not make up a socially useful story?

    • baconbits9 says:

      Nassim’s suggestions here seem worse than useless. Asking cab drivers for financial advice because of their folksy wisdom is a terrible idea.

      Nassim doesn’t tell you to ask your cab driver for financial advice; he highlights how an uncertain cab driver’s opinion can be more valuable than a confident expert’s when the expert’s certainty is unwarranted. The point is how weak the expert’s opinion can be in specific circumstances, not that cabbies are really financial savants in disguise.