More Information is Not Always Better
Incorrect Knowledge is Worse Than No Knowledge at All
I learned this insight from Nassim Taleb, about whom I've written before, and who has influenced much of my thinking on knowledge and skepticism. Many of his insights seem obvious at first, but once you unpack them you discover they are both profound and controversial. So it is with this one.
Though the term is growing old-fashioned, many of us still think of ourselves as living in "The Information Age." We have more information at our fingertips than ever before in human history. And while that's a great blessing, it's also dangerous, primarily because much of that information is false.
False information which we believe to be true will lead us to make worse decisions than we would make without any information at all. A map of Brussels is no use in Amsterdam. Worse, actually attempting to use the map of Brussels in Amsterdam would lead you further astray than would walking around without a map at all (in which case you’d pay close attention to your surroundings to avoid losing your way).
The internet abounds with false information, much of it useless and some of it downright harmful. Here, I'm not really talking about propaganda (disinformation) or wacky theories (misinformation), though those are clearly problems. People have always held crazy beliefs, and if you went back two hundred years and interviewed people, you'd find many of them believed in conspiracy theories as bizarre as any circulating today.
The real problem is the information we trust because we believe it has been vetted and verified by the proper authorities. False confidence leads to disastrous consequences. As the scandal in Alzheimer's research grows, the public is only now realizing the extent to which it has been led astray. False theories, based on false evidence, led to decades of research into drugs which targeted the wrong things, with the result that billions of dollars were wasted on treatments which did not make people better. Doctors prescribed these drugs and people took them, because they believed the drugs would actually do something to help them. And they didn't.
Less sad, but no less stunning, were the decades spent (prior to the internet) promoting nutritional theories which were fundamentally wrong, and which did nothing to combat the rising obesity epidemic. (Some have claimed these theories made the epidemic worse, but I see no dispositive evidence one way or the other and tend to think other factors were more important.) In particular, the theory that fat (the macronutrient) caused weight gain led to decades of low-fat diets which were high in sugar. It turned out most people would have been better off eating butter and red meat than cereal and low-fat cookies. (Now, the carbohydrates-cause-obesity theory also appears to be falling apart, and while I think it has more going for it than the low-fat hypothesis, I think it will prove to be a mistake in hindsight, too. In my professional opinion, it is more important to focus on the quality of carbohydrates and fat than the overall ratio in the diet.)
It's true that we know more now than in previous eras of human history, but a quick look at the track record of science shows that in every era, people were certain about conclusions which later turned out to be false. It is safe to assume that some of what we believe today will prove likewise. Moreover, the replication crisis (which I will never grow tired of mentioning because of how devastating it has been) demonstrates that the safest posture is one of extreme skepticism towards almost any new scientific claim.1 In some fields, a study's findings are more likely to fail replication than to replicate, meaning the study's conclusion is at least as likely to be false as it is to be true.
My story, "Studies Show," is a joke. But there's some truth in it. A lot of scientific research is designed and gamed to achieve certain results. Again, this is worse than knowing nothing at all, because if it turns out to be false, scientific information can lead us to design technologies which do not work or pursue strategies which backfire.
It should go without saying that the conclusion here isn't to embrace "alternative" health/science/medicine/etc. (which suffers from all the same problems, often to a much greater extent) merely because mainstream science has its issues. But it does need to be said, because most people would rather believe a false theory than admit that they know nothing (i.e., when the map of Brussels doesn't work, they'd rather have a map of Madrid than walk around without any map at all).
So far, most of my examples have come from science, but this isn't just a problem in science. It's even more of a problem in economics, where we still struggle to say with any degree of certainty whether the China trade shock caused a decline in manufacturing jobs, whether economic inequality has materially increased since the 1970s, or whether compensation for average workers has declined since the 1970s. All three are asserted in common discourse as demonstrably proven, but all of them are contested, and at least two are almost certainly false.2
Some people think that AI will solve these problems in economics, but I doubt it. AI may help. But anyone who has used LLMs will tell you that they still make errors. Worse, they are often totally confident in their errors and very clever in their attempts to convince you that their "hallucinations" (i.e., the "facts" they made up) are true. No doubt many of these errors will diminish as the technology progresses. But there will always be some knowledge that is unavailable to AI. I take Hayek and von Mises to be arguing that economic planning is impossible not because it is impracticable to collect all of the necessary information in one place, but because the nature of some of the information itself is such that it cannot be collected or determined (in some cases, without being altered or destroyed). In other words, unless AI is literally omniscient and omnipotent (i.e., a god), it can't solve the knowledge problem. Not in economics and not, I suspect, in science either.
Of course, there are Kool-Aid-drinking researchers at OpenAI who will tell you that AI will become God, but they need to touch grass.
When More Information is Better
Provided it is true, more information is often better than less. The more we know about the world, the better we are at making decisions. Contrary to the "paradox of choice" hypothesis, we aren't better off in a world in which poverty and scarcity constrain our decisions, including poverty and scarcity of knowledge. You sometimes hear someone say that "it's better to be dumb and happy," but this is both silly and false. Happiness has more to do with attitude and temperament and emotional resourcefulness than it does with "not knowing the terrible truth about the world." The truth is that there is much to be happy about in the world. Especially for those of us who hold a tragic view of human nature, and believe the possibilities for utopia are terribly limited, there is quite a lot going right in the world today (things could be, and usually have been, far worse than they are now).
Intelligence and knowledge have very little bearing on predicting who will be happy. There are plenty of intelligent, knowledgeable people who respond better to adversity than others who are less intelligent and less knowledgeable, and vice versa. The difference between someone who can handle sad or unpleasant information and someone who can’t is attitude.
That aside, you can always make better decisions about the world if you have a clearer picture of it. True knowledge is better than no knowledge.
My argument therefore is decidedly not that it’s better to know nothing. My argument is that it is best to know the truth, but that the worst is to be certain in what is not true. In cases where the truth is in doubt, it is best to hedge and admit uncertainty.
But uncertainty is unpleasant for many people. We would rather be categorical in our beliefs, and it takes some effort to accept that we don’t fully have answers to many questions and that this doesn’t mean that the questions don’t have answers, or that they can’t be answered. Nihilism is an attempt to turn uncertainty into a categorical rejection – if truth is difficult to come by, there is no truth at all. It isn’t nihilism to embrace uncertainty and chaos, but rather humility. It requires accepting that there is much we can’t control – much that you can’t control, much that I can’t control, much that the experts can’t control, much that governments and corporations and nations and societies and nonprofits and charities and churches can’t control, much that human beings writ large can’t control.
Perhaps someday more of it will come under our control. We can divert rivers and build canals. We can fly through the air. We can send telescopes into orbit to collect pictures of the known universe.3 We know more and can do more than our ancestors could.
That doesn’t make us better than them. It makes us the beneficiaries of their hard work. But progress in the pursuit of truth is often slow and incremental. We should expect setbacks along the way, and I don’t think it likely that we will ever know all that there is to be known, in this life at least.
That realization should make us feel awfully small. In spite of all our many advances as a species, we are still stumbling in the dark. Perhaps we have managed to remove our blindfolds. But the quickest way to tie the blindfold back on is to assume that we have already conquered all knowledge and that the world is ours. Hubris leads only to false knowledge, and false knowledge is worse than no knowledge at all.
Which isn’t the same as saying we should doubt all scientific claims. It’s that we should have a bias towards theories which have held for a long time (and have therefore been more thoroughly tested over a longer period of time, as Taleb would point out), and a bias against newer claims which upend old knowledge. Naturally, there will be scenarios where this principle doesn’t hold, but in general if someone comes along with a claim that flies in the face of decades of evidence to the contrary, we should assume it’s false until proven otherwise.
2. I’m inclined to believe the evidence shows all three to be false, despite the fact that a majority of people believe them. However, the strongest claim is the first.
3. But not the unknown universe. Which, perhaps, is larger than we expect. Or perhaps not. You never know.