I’ve been thinking a lot about the word “should” lately, and how it shapes a struggle between perception and expectation. The world often does things we don’t expect, threatening our models of how it acts. The word “should” is often our first line of defense against changing those models.
Complexity and Language
The world that we live in is incredibly complex, and all the data that we perceive or could perceive is too much to fit in our heads at once. While we have vast amounts of storage space in our heads, our ability to pay attention to all of it, all at once, is limited. With our senses, we can only collect so much data at once. With our minds, we can only process some of that.
Humans, unlike other primates, can use language to think about things that are neither “here” nor “now.” We use language not just to communicate with others, but also for abstract thought. To think about anything that isn’t happening right here and now, we have to use language.
It can be easy to assume that because the words in our head seem to erupt in a steady stream, linear existence is their natural shape. That is not, however, the case. Instead, the words are shaped by underlying schemas.
A schema is a mental map for understanding and interacting with a subject–a cluster of ideas that are all attached to each other. We might have a schema for driving, another for ordering in a restaurant, and another for situations of conflict. If cognitive scientists are correct, these ideas are often organized internally into hierarchical categories.
For example, when I think about the tree outside my window, it is “that tree,” but at the same time it is a eucalyptus tree, with all the associations of that category–such as its distinctive smell. Further, it is part of the general class of “tree,” sharing features like bark and large size at maturity.
We can take this even further and realize that it is in the “plant” category, sharing traits with all (or almost all) plants: roots, and leaves colored green by chlorophyll.
“That tree” simultaneously is part of all these mental groups. So, when I see the tree, it automatically connects in my head with all the other things that fit into those categories. I can see the connection between “that tree” and the romaine lettuce on my sandwich, even though they seem to have little in common. I don’t have to think about the links–they’re just there. That is the power of schemas.
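The category chains above (eucalyptus to tree to plant; romaine lettuce to plant) can be sketched as a toy hierarchy. To be clear, this is only an illustration of the idea of nested categories–the names and structure here are mine, not a claim about how brains actually store anything.

```python
# Toy sketch of hierarchical categories: each concept points to its parent
# category. The names and structure are illustrative only.
parents = {
    "that tree": "eucalyptus",
    "eucalyptus": "tree",
    "tree": "plant",
    "romaine lettuce": "lettuce",
    "lettuce": "plant",
}

def ancestors(concept):
    """Walk up the hierarchy, collecting every category a concept belongs to."""
    chain = []
    while concept in parents:
        concept = parents[concept]
        chain.append(concept)
    return chain

def shared_category(a, b):
    """Find the first category two concepts have in common, if any."""
    seen = set(ancestors(a))
    for cat in ancestors(b):
        if cat in seen:
            return cat
    return None

print(shared_category("that tree", "romaine lettuce"))  # → plant
```

The point of the sketch is the automatic link: nothing connects “that tree” to romaine lettuce directly, yet a walk up the hierarchy finds “plant” without any deliberate search–which is roughly what schemas seem to do for us for free.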
Data Manager 1.0
As I said before, the world contains more complexity than we can fit in our heads. We use schemas not only to sort information, but also to guide our actions in collecting it. Where do we look? What do we notice? What’s important, and what’s irrelevant?
Schemas don’t just organize information; they also filter it for relevance and manage its collection. There are probably vast quantities of information “in my head” somewhere, but it isn’t stored in ways that let me get to it. After all, I read a license plate on a car in front of me yesterday. That number is stored somewhere in my head. But I can’t call it up…it’s been marked “not relevant.”
Yet there is another category of data–data we didn’t collect. The truth is that we can’t collect every possible bit of data presented to us. We choose how to act, and what to focus on, and much of what we could see, we don’t even really look at.
There were probably other license plates I “saw” yesterday but never focused on. The “driving” schema I use for moving a 1-ton vehicle safely told me that information wasn’t relevant, so I never collected it at all.
But these schemas, for all their power, don’t cover every situation. They are not some magical computer program that takes in all data and processes it. Schemas are closer to guidelines for interacting with the world.
Coulda, Woulda, Shoulda
And that brings us back to the word “should.” When we’re looking at the world, and it doesn’t act the way we expect based on our schemas, we often think that it should. Yet this “should” is not some value-free prediction reminiscent of ideal scientific observation.
We might think that there “shouldn’t” be a cop on this road at this time of night, but we’re simply taking a model of the world and hoping it’s true. Maybe we get a ticket, and maybe we don’t. We act on our schema, but it’s a best guess on things–a best guess necessitated by the fact that we’re not omniscient and can’t even process all the information that surrounds us.
Every time the world surprises us, we might say, “Hey, he should have done X.” But what we’re doing (besides expressing frustration at a speeding ticket!) is telling our cognitive model not to adjust to new data.
That’s the danger of the word “should.” It prevents us from collecting data and adjusting our view of the world. If we move toward a model of scientific inquiry, we can replace “I should have X” with “let me try X next time and see if I get different results.”
Yeah, we should do that.