Originally on Substack
The lump of cognition fallacy
One of the most common bad ways people think about economics is the lump of labor fallacy: the idea that there is a fixed, finite amount of work to do in an economy. The reason this is wrong is simple: doing things leads to more things to do. If you open a car factory, that’s going to generate demand for more mechanics. Mechanics generate demand for more tools and supplies. As an economy grows, there are more things to do, not fewer. The American economy today has way, way more demand for very specific tasks than it used to. People worried about immigrants taking jobs sometimes imply that there is a fixed, unchanging number of jobs in America, and that immigrants doing jobs won’t generate additional demand for more work. They’re falling prey to the fallacy.
Louis Anslow, who runs the Pessimists Archive, coined a useful extension of this idea: the lump of cognition fallacy, the idea that there is a fixed amount of thinking to do. I see it come up a lot in AI discourse. Like the lump of labor fallacy, it’s wrong for a simple reason: thinking often leads to more things to think about.
I was recently sent a funny article in the Guardian about how the author isn’t dating anyone who uses chatbots in part because they’re bad for the environment. Articles like this generate a few easy clicks for my posts. But I was struck by the author’s broader complaint, shared by many people she interviewed, that AI somehow causes people to think less throughout the day. That it’s literally always intellectually lazy to use. Take this quote:
Pereira thinks that using ChatGPT “shows such a laziness”.
“It’s like you can’t think for yourself, and you have to rely on an app for that,”
The article lists a few things that I agree are bad to use ChatGPT for, like writing messages on dating apps. But the author extends this to strongly imply that using ChatGPT at all is causing people to think less, because any cognition the chatbot performs leaves the user with fewer thoughts to think. I would brush this off as silly rage bait if I weren’t also so regularly bumping into the same idea elsewhere. I’ve written about this a bit before, but really want to get across just how strange and obviously wrong this is as a model of how human thought works. I also want to popularize Louis’s name for it. I find myself muttering “the lump of cognition fallacy” pretty regularly now. I want to get across a few intuitions about why there isn’t a limited amount of cognition to do, and why outsourcing cognition is often just a way of allowing yourself to think more deeply and meaningfully rather than avoiding thought.
Is it lazy to watch a movie instead of making up a story in your head?
Choosing to watch a movie is a massive outsourcing of thought. You could just close your eyes and imagine the whole thing in your head. There are so, so many mental tasks involved that you’re choosing to have someone else do for you:
Making up a story
Writing the music
Choosing the lighting, set, decorations, costumes, etc.
Have you ever seen someone watching a movie and thought “they’re so lazy for not just imagining a story?” If not, why?
The reason is that there is not a limited amount of thinking to do about the story. The fact that other people have put so much mental effort into so many parts of it opens up way more potential ways to think about it compared to just making up a movie in your head, for the same reason a large complex economy has way more job opportunities than a small simple economy, even though in the large economy more work has “already been done.”
I remember watching Lord of the Rings as a kid and being totally overwhelmed. I didn’t think about anything else for days. The fact that so many adults had already done so much complex thought about the movie, thought that wasn’t available to me as a kid, just meant that my own thoughts about it exploded in a ton of directions they wouldn’t have if I were just reacting to something I made up in my head. Just like we benefit from specialization in labor, I was benefiting from the cognitive specialization of people who had spent decades thinking about story, images, music, and sets, and this left me with way more things to think about, in the same way the wild level of economic specialization that goes into making a computer just leaves you with way more you can do with it after. Similarly, buying a pencil is a way of outsourcing physical labor to thousands of other people. Just like outsourcing labor to other people is mainly a way to allow yourself to do more in total (buying a good laptop vs. trying to build one yourself), outsourcing cognition to a movie leaves you with way more high-level thoughts to think.
The extended mind
This leads into one of my favorite ideas from philosophy of mind, the extended mind thesis: much of our cognition isn’t limited to our skull and brain; it also happens in our physical environment, so a lot of what we define as our minds could also be said to exist in the physical objects around us.
One simple example is how we use our phones to store information. Storing our friends’ phone numbers in our phones is similar to storing them in our memory. In both cases, they mostly stay out of our conscious minds. When we want to retrieve them, they are easily available. Over time, dialing a friend’s phone number might become so second nature that you don’t actively think about it. In the same way, having the number stored might mean you never actually have to look at it; you just select the friend to call. In both cases, you’re causing something like cognition to happen in order to achieve some goal, with most of it not happening in your conscious experience. It’s behaving as if it’s a part of your broader mind. It seems kind of arbitrary whether it’s happening in the neurons in your brain or in the circuits in your phone. It’s true that you could lose your phone and therefore lose the stored knowledge, but you could also have a part of your brain cut out. Our thinking is often supported by hugely complex background processes, both in our skulls and outside of them. The extended mind seems like a useful analogy to help us see how to get the most out of our thought. Offloading our thought onto the environment is, in some sense, a method of radically expanding and freeing up our minds.
The built environment
To get more extreme, most of our physical environments have been designed specifically to minimize the amount of thinking we have to do to achieve our daily goals. The reason many parts of society are standardized is to reduce the cognitive load of navigating them. Grocery stores are all laid out in similar ways, keyboards are arranged in the same order, street addresses are systematized clearly, all to prevent us from having to expend precious conscious thought on navigating them. When I approach a door, I have no conscious thoughts at all about how to open it. My active thinking is occupied by other stuff, my arm just subconsciously reaches out to the correct location. In these moments, I have “outsourced” my thinking to the door design, because there was a possible world where I had to actively think about opening the door, and the reason I don’t is how the door was crafted.
Try just walking around your daily life and observing how much of the built environment exists to minimize the amount you need to consciously think about getting what you want. Try to imagine how much additional thinking you would need to do if things were designed differently. You’ll quickly notice that huge swaths of our lives are designed to quickly become intuitive to us, so that we don’t constantly need to occupy our thoughts with navigating our built environments or figuring out simple tasks like making payments or traveling or eating. The fact that I have outsourced so many possible thoughts to my built environment liberates me to think about higher level stuff, the things I actually find deeply valuable about the world. I can spend more time thinking about my friends or my job or interesting ideas or media I like or just pausing to take in the beauty of a tree or a building. I don’t need to constantly plan out how to move through the world, because those decisions have often been “outsourced” to civilization.
Other people
Thought is incredibly social. We learn through imitation. Thought uses shared language and symbols. Culture transmits old tacit knowledge. Every thought I’ve had has built off the complex thoughts of other people. In some sense I’ve offloaded the learning required to be part of an advanced civilization to all prior humanity. This is all extremely positive-sum. There isn’t less to think about now that humanity has thought its way through the last 10,000 years and left us the results.
Institutions
More broadly and abstractly, much of the “reasoning” happening in society at any one time might actually be happening via the mechanisms of markets, government institutions, or culture, evolving over time, maybe toward some higher way of being. This is kind of what Hegel was talking about.
Civilization as offloading thought
This is all so obvious it seems silly to write, but it actually doesn’t make sense under the lump of cognition fallacy. Our ancestors had to expend way more cognitive power just navigating their day to day lives. The modern world has more or less completely removed that need compared to most of human history. With so little of that conscious cognition happening, people who believe in the lump of cognition should predict that people just think way less often now than they did before. But that’s clearly not true. With our minds freed from huge amounts of minor inconveniences, we can spend our days thinking about more interesting stuff. The philosopher Whitehead described this process well when he said:
Civilization advances by extending the number of important operations which we can perform without thinking about them.
I think people who worry about how chatbots literally always involve outsourcing some mental task might not be noticing the gigantic mountain of mental tasks we have already outsourced to civilization, and how this has only liberated us to think more deeply and meaningfully more often. The lump of cognition fallacy causes people to see this backwards: instead of cognition leading to more and more thought, they see it as draining a finite pool, and nothing will be left to think about once it’s drained.
When is it good or bad to outsource thinking to chatbots?
So there are plenty of examples where it’s good to offload our cognition, because doing so frees us up to live more meaningfully and think in more complex ways about what actually matters to us. But there are also clearly cases where it’s very bad to offload our cognition. Things like:
Homework.
Messages on dating apps.
Summarizing a valuable complex book instead of reading it (assuming you had the time and energy to read it and would have benefited from it).
Personal connection and close conversation.
What’s the difference between these and all the good ways we have outsourced our thinking? When is it bad to outsource?
It seems pretty obvious to me that there are a few key places where outsourcing your thinking is bad. They all overlap with the places where outsourcing your labor is bad. It’s bad to outsource your cognition when it:
Builds complex tacit knowledge you’ll need for navigating the world in the future. Homework is valuable because it trains your brain to see more and more complex patterns in the field you’re trying to learn about. Marinating in cognitive work in a field can give you subtle, hard-to-communicate knowledge about it, or at least gives you knowledge you can easily recall in the future instead of having to look it up. It’s bad to outsource cognition on homework for the same reason it’s bad to hire someone to take your place as a student in a class.
Is an expression of care and presence for someone else. If a friend or partner asks for your help with a problem, they often want to feel your mental presence just as much if not more than they want a technical solution to the problem itself. Paying someone else to make something nice for your partner feels less special.
Is a valuable experience on its own. I wouldn’t want to outsource the thoughts I have after a movie, the simple appreciation of a nice day, or the thoughts in a night with friends. Somewhat outsourcing these can be really valuable (hearing what someone else thinks about a movie might set off my thinking in wildly new directions), but I still want the experience of thinking about these a lot, because the experience of thinking about them is itself valuable and nice.
Is deceptive to fake. If someone’s messaging you on a dating app, they want to know what you’re actually like.
Is focused on a problem that is deathly important to get right, and where you don’t totally trust who you’re outsourcing it to. I don’t hand off very high-stakes decisions to chatbots.
The places where it’s good to offload your cognition to chatbots, where I use them over and over again, are the places where it’s good to offload cognition in any other circumstance, where we rely on the built-up knowledge of other people and civilization, where I won’t gain anything from putting in cognitive effort, and where there are massive positive-sum spillover effects of other people being involved in the thought process.
I was recently around someone complaining about how people aren’t thinking for themselves anymore because of chatbots. They mentioned that multiple people they knew were using them to find recipes and make grocery lists. Most of my best cooking experiences have come from following recipes I found online. My willingness to offload the mental effort of putting the recipe together means I get to do much more ambitious things in my cooking. I don’t think this person would ever have been upset if I had mentioned I Googled a recipe, but because a chatbot was involved, the lump of cognition fallacy was operating, and they had the sense that having a recipe given to you took away a finite opportunity to think for yourself.
Among other things, I use chatbots as research assistants. They do the tedious, repetitive work of assembling sources on a relevant topic, I read and check the sources themselves. I get the benefit of reading about and thinking through the key ideas, they do the drudgery of long boring internet searches and digging through academic literature for relevant information. For the same reason human research assistants don’t turn humans stupid, chatbots aren’t turning me stupid either, even though both involve massive offloading of cognitive effort. They free me up to do the type of thinking that actually helps me learn a lot about the world. In all the other ways I use them, they imitate a good conversation with a subject-matter expert, or a simple assistant to arrange text in a way I want. The lump of cognition fallacy implies that anyone using a research assistant is missing out on some of the thought available for their project. Using the research assistant must make them stupid and lazy. But obviously this isn’t true. Research assistants leave people to think deeper and more thoroughly about the problems they’re tackling. The incredibly positive-sum nature of thought is obvious when the research assistant is a human. But when it’s a chatbot, people suddenly sense some zero-sum way in which, if the chatbot is thinking, they’re not.
When you’re in conversations about chatbots “replacing our thinking” see if the person is legitimately concerned about the chatbot removing some important experience, or if they’re just operating under the lump of cognition fallacy. Articles about this abound with examples. Take this from the same Guardian article:
Noijeen stopped using AI to code, and uses it very sparingly in his personal life. He will make fun of friends who use it too much. He recently met up with an old friend who lives a three-hour train ride away. They decided to meet in the middle. The friend said he would use ChatGPT to find the right spot, but Noijeen just looked at a map. “There’s a city exactly between us,” he said. “Why do you need to ask ChatGPT for that?”
If you’ve never worried about Google Maps “replacing thinking” but you find yourself reacting negatively to a chatbot offering directions, or you regularly check restaurant reviews before going out but find yourself repulsed at asking a chatbot for recommendations, consider that you’re falling victim to a weird, selective application of the lump of cognition fallacy that seems to appear exclusively, but consistently, when chatbots are involved. It depresses me that so many people seem to want to swamp their day-to-day lives in the meaningless cognitive minutiae of long, fruitless internet searches, or working solo without any outside input, because they fear that if they offload the minutiae, they won’t have anything left to think about. There’s a whole world out there.